00:00:00.002 Started by upstream project "autotest-per-patch" build number 127165
00:00:00.002 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.075 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.076 The recommended git tool is: git
00:00:00.076 using credential 00000000-0000-0000-0000-000000000002
00:00:00.079 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.105 Fetching changes from the remote Git repository
00:00:00.110 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.141 Using shallow fetch with depth 1
00:00:00.141 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.141 > git --version # timeout=10
00:00:00.163 > git --version # 'git version 2.39.2'
00:00:00.163 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.187 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.187 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.699 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.711 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.723 Checking out Revision bd3e126a67c072de18fcd072f7502b1f7801d6ff (FETCH_HEAD)
00:00:06.723 > git config core.sparsecheckout # timeout=10
00:00:06.734 > git read-tree -mu HEAD # timeout=10
00:00:06.750 > git checkout -f bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=5
00:00:06.791 Commit message: "jenkins/autotest: add raid-vg subjob to autotest configs"
00:00:06.792 > git rev-list --no-walk bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=10
00:00:06.873 [Pipeline] Start of Pipeline
00:00:06.886 [Pipeline] library
00:00:06.887 Loading library shm_lib@master
00:00:06.887 Library shm_lib@master is cached. Copying from home.
00:00:06.903 [Pipeline] node
00:00:06.915 Running on WFP19 in /var/jenkins/workspace/crypto-phy-autotest
00:00:06.916 [Pipeline] {
00:00:06.926 [Pipeline] catchError
00:00:06.927 [Pipeline] {
00:00:06.937 [Pipeline] wrap
00:00:06.978 [Pipeline] {
00:00:06.996 [Pipeline] stage
00:00:06.999 [Pipeline] { (Prologue)
00:00:07.174 [Pipeline] sh
00:00:07.461 + logger -p user.info -t JENKINS-CI
00:00:07.478 [Pipeline] echo
00:00:07.479 Node: WFP19
00:00:07.487 [Pipeline] sh
00:00:07.780 [Pipeline] setCustomBuildProperty
00:00:07.791 [Pipeline] echo
00:00:07.792 Cleanup processes
00:00:07.796 [Pipeline] sh
00:00:08.075 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:08.075 637355 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:08.088 [Pipeline] sh
00:00:08.367 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:08.367 ++ grep -v 'sudo pgrep'
00:00:08.367 ++ awk '{print $1}'
00:00:08.367 + sudo kill -9
00:00:08.367 + true
00:00:08.379 [Pipeline] cleanWs
00:00:08.388 [WS-CLEANUP] Deleting project workspace...
00:00:08.388 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.395 [WS-CLEANUP] done
00:00:08.398 [Pipeline] setCustomBuildProperty
00:00:08.411 [Pipeline] sh
00:00:08.690 + sudo git config --global --replace-all safe.directory '*'
00:00:08.767 [Pipeline] httpRequest
00:00:08.796 [Pipeline] echo
00:00:08.797 Sorcerer 10.211.164.101 is alive
00:00:08.805 [Pipeline] httpRequest
00:00:08.809 HttpMethod: GET
00:00:08.809 URL: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:08.810 Sending request to url: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:08.831 Response Code: HTTP/1.1 200 OK
00:00:08.831 Success: Status code 200 is in the accepted range: 200,404
00:00:08.832 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:13.497 [Pipeline] sh
00:00:13.781 + tar --no-same-owner -xf jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:13.798 [Pipeline] httpRequest
00:00:13.820 [Pipeline] echo
00:00:13.822 Sorcerer 10.211.164.101 is alive
00:00:13.834 [Pipeline] httpRequest
00:00:13.839 HttpMethod: GET
00:00:13.840 URL: http://10.211.164.101/packages/spdk_325310f6aff355f72396906cdf192086b9ee6f44.tar.gz
00:00:13.840 Sending request to url: http://10.211.164.101/packages/spdk_325310f6aff355f72396906cdf192086b9ee6f44.tar.gz
00:00:13.857 Response Code: HTTP/1.1 200 OK
00:00:13.857 Success: Status code 200 is in the accepted range: 200,404
00:00:13.857 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_325310f6aff355f72396906cdf192086b9ee6f44.tar.gz
00:01:14.290 [Pipeline] sh
00:01:14.577 + tar --no-same-owner -xf spdk_325310f6aff355f72396906cdf192086b9ee6f44.tar.gz
00:01:17.881 [Pipeline] sh
00:01:18.167 + git -C spdk log --oneline -n5
00:01:18.167 325310f6a accel_perf: add support for DIX Generate/Verify
00:01:18.167 fcdc45f1b test/accel/dif: add DIX Generate/Verify suites
00:01:18.167 ae7704717 lib/accel: add DIX verify
00:01:18.167 8183d73cc lib/accel: add DIX generate
00:01:18.167 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:01:18.179 [Pipeline] }
00:01:18.197 [Pipeline] // stage
00:01:18.207 [Pipeline] stage
00:01:18.209 [Pipeline] { (Prepare)
00:01:18.229 [Pipeline] writeFile
00:01:18.247 [Pipeline] sh
00:01:18.539 + logger -p user.info -t JENKINS-CI
00:01:18.554 [Pipeline] sh
00:01:18.838 + logger -p user.info -t JENKINS-CI
00:01:18.851 [Pipeline] sh
00:01:19.166 + cat autorun-spdk.conf
00:01:19.166 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.166 SPDK_TEST_BLOCKDEV=1
00:01:19.166 SPDK_TEST_ISAL=1
00:01:19.166 SPDK_TEST_CRYPTO=1
00:01:19.166 SPDK_TEST_REDUCE=1
00:01:19.166 SPDK_TEST_VBDEV_COMPRESS=1
00:01:19.166 SPDK_RUN_UBSAN=1
00:01:19.166 SPDK_TEST_ACCEL=1
00:01:19.175 RUN_NIGHTLY=0
00:01:19.180 [Pipeline] readFile
00:01:19.208 [Pipeline] withEnv
00:01:19.210 [Pipeline] {
00:01:19.226 [Pipeline] sh
00:01:19.512 + set -ex
00:01:19.513 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]]
00:01:19.513 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:01:19.513 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.513 ++ SPDK_TEST_BLOCKDEV=1
00:01:19.513 ++ SPDK_TEST_ISAL=1
00:01:19.513 ++ SPDK_TEST_CRYPTO=1
00:01:19.513 ++ SPDK_TEST_REDUCE=1
00:01:19.513 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:01:19.513 ++ SPDK_RUN_UBSAN=1
00:01:19.513 ++ SPDK_TEST_ACCEL=1
00:01:19.513 ++ RUN_NIGHTLY=0
00:01:19.513 + case $SPDK_TEST_NVMF_NICS in
00:01:19.513 + DRIVERS=
00:01:19.513 + [[ -n '' ]]
00:01:19.513 + exit 0
00:01:19.522 [Pipeline] }
00:01:19.542 [Pipeline] // withEnv
00:01:19.548 [Pipeline] }
00:01:19.565 [Pipeline] // stage
00:01:19.576 [Pipeline] catchError
00:01:19.578 [Pipeline] {
00:01:19.591 [Pipeline] timeout
00:01:19.591 Timeout set to expire in 1 hr 0 min
00:01:19.592 [Pipeline] {
00:01:19.605 [Pipeline] stage
00:01:19.606 [Pipeline] { (Tests)
00:01:19.618 [Pipeline] sh
00:01:19.897 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest
00:01:19.897 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest
00:01:19.897 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest
00:01:19.897 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]]
00:01:19.897 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:01:19.897 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output
00:01:19.897 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]]
00:01:19.897 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:01:19.897 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output
00:01:19.897 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:01:19.897 + [[ crypto-phy-autotest == pkgdep-* ]]
00:01:19.897 + cd /var/jenkins/workspace/crypto-phy-autotest
00:01:19.897 + source /etc/os-release
00:01:19.897 ++ NAME='Fedora Linux'
00:01:19.897 ++ VERSION='38 (Cloud Edition)'
00:01:19.897 ++ ID=fedora
00:01:19.897 ++ VERSION_ID=38
00:01:19.897 ++ VERSION_CODENAME=
00:01:19.897 ++ PLATFORM_ID=platform:f38
00:01:19.897 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:19.897 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:19.897 ++ LOGO=fedora-logo-icon
00:01:19.897 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:19.897 ++ HOME_URL=https://fedoraproject.org/
00:01:19.897 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:19.897 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:19.897 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:19.897 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:19.897 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:19.897 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:19.897 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:19.897 ++ SUPPORT_END=2024-05-14
00:01:19.897 ++ VARIANT='Cloud Edition'
00:01:19.897 ++ VARIANT_ID=cloud
00:01:19.897 + uname -a
00:01:19.897 Linux spdk-wfp-19 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:19.897 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:01:24.093 Hugepages
00:01:24.093 node hugesize free / total
00:01:24.093 node0 1048576kB 0 / 0
00:01:24.093 node0 2048kB 0 / 0
00:01:24.093 node1 1048576kB 0 / 0
00:01:24.093 node1 2048kB 0 / 0
00:01:24.093
00:01:24.093 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:24.093 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:24.093 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:24.093 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:24.093 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:24.093 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:24.093 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:24.093 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:24.093 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:24.093 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:24.093 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:24.093 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:24.093 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:24.093 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:24.093 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:24.093 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:24.093 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:24.093 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:24.093 + rm -f /tmp/spdk-ld-path
00:01:24.093 + source autorun-spdk.conf
00:01:24.093 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:24.093 ++ SPDK_TEST_BLOCKDEV=1
00:01:24.093 ++ SPDK_TEST_ISAL=1
00:01:24.093 ++ SPDK_TEST_CRYPTO=1
00:01:24.093 ++ SPDK_TEST_REDUCE=1
00:01:24.093 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:01:24.093 ++ SPDK_RUN_UBSAN=1
00:01:24.093 ++ SPDK_TEST_ACCEL=1
00:01:24.093 ++ RUN_NIGHTLY=0
00:01:24.093 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:24.093 + [[ -n '' ]]
00:01:24.093 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk
00:01:24.093 + for M in /var/spdk/build-*-manifest.txt
00:01:24.093 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:24.093 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:01:24.093 + for M in /var/spdk/build-*-manifest.txt
00:01:24.093 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:24.093 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:01:24.093 ++ uname
00:01:24.093 + [[ Linux == \L\i\n\u\x ]]
00:01:24.093 + sudo dmesg -T
00:01:24.093 + sudo dmesg --clear
00:01:24.093 + dmesg_pid=638435
00:01:24.093 + [[ Fedora Linux == FreeBSD ]]
00:01:24.093 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:24.093 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:24.093 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:24.093 + [[ -x /usr/src/fio-static/fio ]]
00:01:24.093 + export FIO_BIN=/usr/src/fio-static/fio
00:01:24.093 + FIO_BIN=/usr/src/fio-static/fio
00:01:24.093 + sudo dmesg -Tw
00:01:24.093 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:24.093 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:24.093 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:24.093 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:24.093 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:24.093 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:24.093 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:24.093 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:24.093 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:01:24.093 Test configuration:
00:01:24.093 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:24.093 SPDK_TEST_BLOCKDEV=1
00:01:24.093 SPDK_TEST_ISAL=1
00:01:24.093 SPDK_TEST_CRYPTO=1
00:01:24.093 SPDK_TEST_REDUCE=1
00:01:24.093 SPDK_TEST_VBDEV_COMPRESS=1
00:01:24.093 SPDK_RUN_UBSAN=1
00:01:24.093 SPDK_TEST_ACCEL=1
00:01:24.093 RUN_NIGHTLY=0
13:01:34 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:01:24.093 13:01:34 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:24.093 13:01:34 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:24.093 13:01:34 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:24.093 13:01:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:24.093 13:01:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:24.093 13:01:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:24.093 13:01:34 -- paths/export.sh@5 -- $ export PATH
00:01:24.093 13:01:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:24.093 13:01:34 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:01:24.093 13:01:34 -- common/autobuild_common.sh@447 -- $ date +%s
00:01:24.093 13:01:34 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721905294.XXXXXX
00:01:24.093 13:01:34 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721905294.pCpQCi
00:01:24.093 13:01:34 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:24.093 13:01:34 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:01:24.093 13:01:34 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
13:01:34 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:24.093 13:01:34 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:24.093 13:01:34 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:24.093 13:01:34 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:01:24.093 13:01:34 -- common/autotest_common.sh@10 -- $ set +x
00:01:24.093 13:01:34 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk'
00:01:24.093 13:01:34 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:01:24.093 13:01:34 -- pm/common@17 -- $ local monitor
00:01:24.093 13:01:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:24.093 13:01:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:24.093 13:01:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:24.093 13:01:34 -- pm/common@21 -- $ date +%s
00:01:24.093 13:01:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:24.093 13:01:34 -- pm/common@21 -- $ date +%s
00:01:24.093 13:01:34 -- pm/common@25 -- $ sleep 1
00:01:24.093 13:01:34 -- pm/common@21 -- $ date +%s
00:01:24.093 13:01:34 -- pm/common@21 -- $ date +%s
00:01:24.093 13:01:34 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721905294
00:01:24.093 13:01:34 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721905294
00:01:24.093 13:01:34 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721905294
00:01:24.093 13:01:34 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721905294
00:01:24.093 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721905294_collect-vmstat.pm.log
00:01:24.093 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721905294_collect-cpu-load.pm.log
00:01:24.093 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721905294_collect-cpu-temp.pm.log
00:01:24.093 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721905294_collect-bmc-pm.bmc.pm.log
00:01:25.031 13:01:35 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:25.031 13:01:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:25.031 13:01:35 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:25.031 13:01:35 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:01:25.032 13:01:35 -- spdk/autobuild.sh@16 -- $ date -u
00:01:25.032 Thu Jul 25 11:01:35 AM UTC 2024
00:01:25.032 13:01:35 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:25.032 v24.09-pre-325-g325310f6a
00:01:25.032 13:01:35 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:25.032 13:01:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:25.032 13:01:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:25.032 13:01:35 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:25.032 13:01:35 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:25.032 13:01:35 -- common/autotest_common.sh@10 -- $ set +x
00:01:25.032 ************************************
00:01:25.032 START TEST ubsan
00:01:25.032 ************************************
00:01:25.032 13:01:35 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:25.032 using ubsan
00:01:25.032
00:01:25.032 real 0m0.001s
00:01:25.032 user 0m0.000s
00:01:25.032 sys 0m0.000s
00:01:25.032 13:01:35 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:25.032 13:01:35 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:25.032 ************************************
00:01:25.032 END TEST ubsan
00:01:25.032 ************************************
00:01:25.032 13:01:35 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:25.032 13:01:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:25.032 13:01:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:25.032 13:01:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:25.032 13:01:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:25.032 13:01:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:25.032 13:01:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:25.032 13:01:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:25.032 13:01:35 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-shared
00:01:25.291 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk
00:01:25.291 Using default DPDK in /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build
00:01:25.551 Using 'verbs' RDMA provider
00:01:41.862 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:56.758 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:56.758 Creating mk/config.mk...done.
00:01:56.758 Creating mk/cc.flags.mk...done.
00:01:56.758 Type 'make' to build.
00:01:56.758 13:02:05 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:01:56.758 13:02:05 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:56.758 13:02:05 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:56.758 13:02:05 -- common/autotest_common.sh@10 -- $ set +x
00:01:56.758 ************************************
00:01:56.758 START TEST make
00:01:56.758 ************************************
00:01:56.758 13:02:05 make -- common/autotest_common.sh@1125 -- $ make -j112
00:01:56.758 make[1]: Nothing to be done for 'all'.
00:02:28.856 The Meson build system
00:02:28.856 Version: 1.3.1
00:02:28.856 Source dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk
00:02:28.856 Build dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp
00:02:28.856 Build type: native build
00:02:28.856 Program cat found: YES (/usr/bin/cat)
00:02:28.856 Project name: DPDK
00:02:28.856 Project version: 24.03.0
00:02:28.856 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:28.856 C linker for the host machine: cc ld.bfd 2.39-16
00:02:28.856 Host machine cpu family: x86_64
00:02:28.856 Host machine cpu: x86_64
00:02:28.856 Message: ## Building in Developer Mode ##
00:02:28.856 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:28.856 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:28.856 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:28.856 Program python3 found: YES (/usr/bin/python3)
00:02:28.856 Program cat found: YES (/usr/bin/cat)
00:02:28.856 Compiler for C supports arguments -march=native: YES
00:02:28.856 Checking for size of "void *" : 8
00:02:28.856 Checking for size of "void *" : 8 (cached)
00:02:28.856 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:28.856 Library m found: YES
00:02:28.856 Library numa found: YES
00:02:28.856 Has header "numaif.h" : YES
00:02:28.856 Library fdt found: NO
00:02:28.856 Library execinfo found: NO
00:02:28.856 Has header "execinfo.h" : YES
00:02:28.856 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:28.856 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:28.856 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:28.856 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:28.856 Run-time dependency openssl found: YES 3.0.9
00:02:28.856 Run-time dependency libpcap found: YES 1.10.4
00:02:28.856 Has header "pcap.h" with dependency libpcap: YES
00:02:28.856 Compiler for C supports arguments -Wcast-qual: YES
00:02:28.856 Compiler for C supports arguments -Wdeprecated: YES
00:02:28.856 Compiler for C supports arguments -Wformat: YES
00:02:28.856 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:28.856 Compiler for C supports arguments -Wformat-security: NO
00:02:28.856 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:28.856 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:28.856 Compiler for C supports arguments -Wnested-externs: YES
00:02:28.856 Compiler for C supports arguments -Wold-style-definition: YES
00:02:28.856 Compiler for C supports arguments -Wpointer-arith: YES
00:02:28.856 Compiler for C supports arguments -Wsign-compare: YES
00:02:28.856 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:28.856 Compiler for C supports arguments -Wundef: YES
00:02:28.856 Compiler for C supports arguments -Wwrite-strings: YES
00:02:28.856 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:28.856 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:28.856 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:28.856 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:28.856 Program objdump found: YES (/usr/bin/objdump)
00:02:28.856 Compiler for C supports arguments -mavx512f: YES
00:02:28.856 Checking if "AVX512 checking" compiles: YES
00:02:28.856 Fetching value of define "__SSE4_2__" : 1
00:02:28.856 Fetching value of define "__AES__" : 1
00:02:28.856 Fetching value of define "__AVX__" : 1
00:02:28.856 Fetching value of define "__AVX2__" : 1
00:02:28.856 Fetching value of define "__AVX512BW__" : 1
00:02:28.856 Fetching value of define "__AVX512CD__" : 1
00:02:28.856 Fetching value of define "__AVX512DQ__" : 1
00:02:28.856 Fetching value of define "__AVX512F__" : 1
00:02:28.856 Fetching value of define "__AVX512VL__" : 1
00:02:28.856 Fetching value of define "__PCLMUL__" : 1
00:02:28.856 Fetching value of define "__RDRND__" : 1
00:02:28.856 Fetching value of define "__RDSEED__" : 1
00:02:28.856 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:28.856 Fetching value of define "__znver1__" : (undefined)
00:02:28.856 Fetching value of define "__znver2__" : (undefined)
00:02:28.856 Fetching value of define "__znver3__" : (undefined)
00:02:28.856 Fetching value of define "__znver4__" : (undefined)
00:02:28.856 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:28.856 Message: lib/log: Defining dependency "log"
00:02:28.856 Message: lib/kvargs: Defining dependency "kvargs"
00:02:28.856 Message: lib/telemetry: Defining dependency "telemetry"
00:02:28.856 Checking for function "getentropy" : NO
00:02:28.856 Message: lib/eal: Defining dependency "eal"
00:02:28.856 Message: lib/ring: Defining dependency "ring"
00:02:28.856 Message: lib/rcu: Defining dependency "rcu"
00:02:28.856 Message: lib/mempool: Defining dependency "mempool"
00:02:28.856 Message: lib/mbuf: Defining dependency "mbuf"
00:02:28.856 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:28.856 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:28.856 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:28.856 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:28.856 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:28.856 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:28.856 Compiler for C supports arguments -mpclmul: YES
00:02:28.856 Compiler for C supports arguments -maes: YES
00:02:28.856 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:28.856 Compiler for C supports arguments -mavx512bw: YES
00:02:28.856 Compiler for C supports arguments -mavx512dq: YES
00:02:28.856 Compiler for C supports arguments -mavx512vl: YES
00:02:28.856 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:28.856 Compiler for C supports arguments -mavx2: YES
00:02:28.856 Compiler for C supports arguments -mavx: YES
00:02:28.856 Message: lib/net: Defining dependency "net"
00:02:28.856 Message: lib/meter: Defining dependency "meter"
00:02:28.856 Message: lib/ethdev: Defining dependency "ethdev"
00:02:28.856 Message: lib/pci: Defining dependency "pci"
00:02:28.856 Message: lib/cmdline: Defining dependency "cmdline"
00:02:28.856 Message: lib/hash: Defining dependency "hash"
00:02:28.856 Message: lib/timer: Defining dependency "timer"
00:02:28.856 Message: lib/compressdev: Defining dependency "compressdev"
00:02:28.856 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:28.856 Message: lib/dmadev: Defining dependency "dmadev"
00:02:28.856 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:28.856 Message: lib/power: Defining dependency "power"
00:02:28.856 Message: lib/reorder: Defining dependency "reorder"
00:02:28.856 Message: lib/security: Defining dependency "security"
00:02:28.856 Has header "linux/userfaultfd.h" : YES
00:02:28.856 Has header "linux/vduse.h" : YES
00:02:28.856 Message: lib/vhost: Defining dependency "vhost"
00:02:28.856 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:28.856 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary"
00:02:28.856 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:28.856 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:28.856 Compiler for C supports arguments -std=c11: YES
00:02:28.856 Compiler for C supports arguments -Wno-strict-prototypes: YES
00:02:28.856 Compiler for C supports arguments -D_BSD_SOURCE: YES
00:02:28.856 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES
00:02:28.856 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES
00:02:28.856 Run-time dependency libmlx5 found: YES 1.24.44.0
00:02:28.857 Run-time dependency libibverbs found: YES 1.14.44.0
00:02:28.857 Library mtcr_ul found: NO
00:02:28.857 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES
00:02:28.857 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES
00:02:28.857 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES
00:02:28.857 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES
00:02:28.857 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES
00:02:28.857 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES
00:02:28.857 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES
00:02:28.857 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES
00:02:28.857 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES
00:02:28.857 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES
00:02:28.857 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES
00:02:28.857 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO
00:02:28.857 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO
00:02:28.857 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES
00:02:28.857 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO
00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO
00:02:34.133 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES
00:02:34.133 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 00:02:34.133 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 00:02:34.133 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 00:02:34.133 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 00:02:34.133 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies 
libmlx5, libibverbs: YES 00:02:34.134 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:02:34.134 Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:02:34.134 Configuring mlx5_autoconf.h using configuration 00:02:34.134 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:02:34.134 Run-time dependency libcrypto found: YES 3.0.9 00:02:34.134 Library IPSec_MB found: YES 00:02:34.134 Fetching value of define "IMB_VERSION_STR" : "1.5.0" 00:02:34.134 Message: drivers/common/qat: Defining dependency "common_qat" 00:02:34.134 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:34.134 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:34.134 Library IPSec_MB found: YES 00:02:34.134 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached) 00:02:34.134 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:02:34.134 Compiler for C supports arguments 
-std=c11: YES (cached) 00:02:34.134 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:02:34.134 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:02:34.134 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:02:34.134 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:02:34.134 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:02:34.134 Run-time dependency libisal found: NO (tried pkgconfig) 00:02:34.134 Library libisal found: NO 00:02:34.134 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:02:34.134 Compiler for C supports arguments -std=c11: YES (cached) 00:02:34.134 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:02:34.134 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:02:34.134 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:02:34.134 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:02:34.134 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5" 00:02:34.134 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:34.134 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:34.134 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:34.134 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:34.134 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:34.134 Program doxygen found: YES (/usr/bin/doxygen) 00:02:34.134 Configuring doxy-api-html.conf using configuration 00:02:34.134 Configuring doxy-api-man.conf using configuration 00:02:34.134 Program mandb found: YES (/usr/bin/mandb) 00:02:34.134 Program sphinx-build found: NO 00:02:34.134 Configuring rte_build_config.h using configuration 00:02:34.134 Message: 00:02:34.134 ================= 00:02:34.134 Applications Enabled 00:02:34.134 ================= 00:02:34.134 
00:02:34.134 apps: 00:02:34.134 00:02:34.134 00:02:34.134 Message: 00:02:34.134 ================= 00:02:34.134 Libraries Enabled 00:02:34.134 ================= 00:02:34.134 00:02:34.134 libs: 00:02:34.134 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:34.134 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:34.134 cryptodev, dmadev, power, reorder, security, vhost, 00:02:34.134 00:02:34.134 Message: 00:02:34.134 =============== 00:02:34.134 Drivers Enabled 00:02:34.134 =============== 00:02:34.134 00:02:34.134 common: 00:02:34.134 mlx5, qat, 00:02:34.134 bus: 00:02:34.134 auxiliary, pci, vdev, 00:02:34.134 mempool: 00:02:34.134 ring, 00:02:34.134 dma: 00:02:34.134 00:02:34.134 net: 00:02:34.134 00:02:34.134 crypto: 00:02:34.134 ipsec_mb, mlx5, 00:02:34.134 compress: 00:02:34.134 isal, mlx5, 00:02:34.134 vdpa: 00:02:34.134 00:02:34.134 00:02:34.134 Message: 00:02:34.134 ================= 00:02:34.134 Content Skipped 00:02:34.134 ================= 00:02:34.134 00:02:34.134 apps: 00:02:34.134 dumpcap: explicitly disabled via build config 00:02:34.134 graph: explicitly disabled via build config 00:02:34.134 pdump: explicitly disabled via build config 00:02:34.134 proc-info: explicitly disabled via build config 00:02:34.134 test-acl: explicitly disabled via build config 00:02:34.134 test-bbdev: explicitly disabled via build config 00:02:34.134 test-cmdline: explicitly disabled via build config 00:02:34.134 test-compress-perf: explicitly disabled via build config 00:02:34.134 test-crypto-perf: explicitly disabled via build config 00:02:34.134 test-dma-perf: explicitly disabled via build config 00:02:34.134 test-eventdev: explicitly disabled via build config 00:02:34.134 test-fib: explicitly disabled via build config 00:02:34.134 test-flow-perf: explicitly disabled via build config 00:02:34.134 test-gpudev: explicitly disabled via build config 00:02:34.134 test-mldev: explicitly disabled via build config 00:02:34.134 test-pipeline: explicitly 
disabled via build config 00:02:34.134 test-pmd: explicitly disabled via build config 00:02:34.134 test-regex: explicitly disabled via build config 00:02:34.134 test-sad: explicitly disabled via build config 00:02:34.134 test-security-perf: explicitly disabled via build config 00:02:34.134 00:02:34.134 libs: 00:02:34.134 argparse: explicitly disabled via build config 00:02:34.134 metrics: explicitly disabled via build config 00:02:34.134 acl: explicitly disabled via build config 00:02:34.134 bbdev: explicitly disabled via build config 00:02:34.134 bitratestats: explicitly disabled via build config 00:02:34.134 bpf: explicitly disabled via build config 00:02:34.134 cfgfile: explicitly disabled via build config 00:02:34.134 distributor: explicitly disabled via build config 00:02:34.134 efd: explicitly disabled via build config 00:02:34.134 eventdev: explicitly disabled via build config 00:02:34.134 dispatcher: explicitly disabled via build config 00:02:34.134 gpudev: explicitly disabled via build config 00:02:34.134 gro: explicitly disabled via build config 00:02:34.134 gso: explicitly disabled via build config 00:02:34.134 ip_frag: explicitly disabled via build config 00:02:34.134 jobstats: explicitly disabled via build config 00:02:34.134 latencystats: explicitly disabled via build config 00:02:34.134 lpm: explicitly disabled via build config 00:02:34.134 member: explicitly disabled via build config 00:02:34.134 pcapng: explicitly disabled via build config 00:02:34.134 rawdev: explicitly disabled via build config 00:02:34.134 regexdev: explicitly disabled via build config 00:02:34.134 mldev: explicitly disabled via build config 00:02:34.134 rib: explicitly disabled via build config 00:02:34.134 sched: explicitly disabled via build config 00:02:34.134 stack: explicitly disabled via build config 00:02:34.134 ipsec: explicitly disabled via build config 00:02:34.134 pdcp: explicitly disabled via build config 00:02:34.134 fib: explicitly disabled via build config 
00:02:34.134 port: explicitly disabled via build config 00:02:34.134 pdump: explicitly disabled via build config 00:02:34.134 table: explicitly disabled via build config 00:02:34.134 pipeline: explicitly disabled via build config 00:02:34.134 graph: explicitly disabled via build config 00:02:34.134 node: explicitly disabled via build config 00:02:34.134 00:02:34.134 drivers: 00:02:34.134 common/cpt: not in enabled drivers build config 00:02:34.134 common/dpaax: not in enabled drivers build config 00:02:34.134 common/iavf: not in enabled drivers build config 00:02:34.134 common/idpf: not in enabled drivers build config 00:02:34.134 common/ionic: not in enabled drivers build config 00:02:34.134 common/mvep: not in enabled drivers build config 00:02:34.134 common/octeontx: not in enabled drivers build config 00:02:34.134 bus/cdx: not in enabled drivers build config 00:02:34.134 bus/dpaa: not in enabled drivers build config 00:02:34.134 bus/fslmc: not in enabled drivers build config 00:02:34.134 bus/ifpga: not in enabled drivers build config 00:02:34.134 bus/platform: not in enabled drivers build config 00:02:34.134 bus/uacce: not in enabled drivers build config 00:02:34.134 bus/vmbus: not in enabled drivers build config 00:02:34.134 common/cnxk: not in enabled drivers build config 00:02:34.134 common/nfp: not in enabled drivers build config 00:02:34.134 common/nitrox: not in enabled drivers build config 00:02:34.134 common/sfc_efx: not in enabled drivers build config 00:02:34.134 mempool/bucket: not in enabled drivers build config 00:02:34.134 mempool/cnxk: not in enabled drivers build config 00:02:34.134 mempool/dpaa: not in enabled drivers build config 00:02:34.135 mempool/dpaa2: not in enabled drivers build config 00:02:34.135 mempool/octeontx: not in enabled drivers build config 00:02:34.135 mempool/stack: not in enabled drivers build config 00:02:34.135 dma/cnxk: not in enabled drivers build config 00:02:34.135 dma/dpaa: not in enabled drivers build config 
00:02:34.135 dma/dpaa2: not in enabled drivers build config 00:02:34.135 dma/hisilicon: not in enabled drivers build config 00:02:34.135 dma/idxd: not in enabled drivers build config 00:02:34.135 dma/ioat: not in enabled drivers build config 00:02:34.135 dma/skeleton: not in enabled drivers build config 00:02:34.135 net/af_packet: not in enabled drivers build config 00:02:34.135 net/af_xdp: not in enabled drivers build config 00:02:34.135 net/ark: not in enabled drivers build config 00:02:34.135 net/atlantic: not in enabled drivers build config 00:02:34.135 net/avp: not in enabled drivers build config 00:02:34.135 net/axgbe: not in enabled drivers build config 00:02:34.135 net/bnx2x: not in enabled drivers build config 00:02:34.135 net/bnxt: not in enabled drivers build config 00:02:34.135 net/bonding: not in enabled drivers build config 00:02:34.135 net/cnxk: not in enabled drivers build config 00:02:34.135 net/cpfl: not in enabled drivers build config 00:02:34.135 net/cxgbe: not in enabled drivers build config 00:02:34.135 net/dpaa: not in enabled drivers build config 00:02:34.135 net/dpaa2: not in enabled drivers build config 00:02:34.135 net/e1000: not in enabled drivers build config 00:02:34.135 net/ena: not in enabled drivers build config 00:02:34.135 net/enetc: not in enabled drivers build config 00:02:34.135 net/enetfec: not in enabled drivers build config 00:02:34.135 net/enic: not in enabled drivers build config 00:02:34.135 net/failsafe: not in enabled drivers build config 00:02:34.135 net/fm10k: not in enabled drivers build config 00:02:34.135 net/gve: not in enabled drivers build config 00:02:34.135 net/hinic: not in enabled drivers build config 00:02:34.135 net/hns3: not in enabled drivers build config 00:02:34.135 net/i40e: not in enabled drivers build config 00:02:34.135 net/iavf: not in enabled drivers build config 00:02:34.135 net/ice: not in enabled drivers build config 00:02:34.135 net/idpf: not in enabled drivers build config 00:02:34.135 
net/igc: not in enabled drivers build config 00:02:34.135 net/ionic: not in enabled drivers build config 00:02:34.135 net/ipn3ke: not in enabled drivers build config 00:02:34.135 net/ixgbe: not in enabled drivers build config 00:02:34.135 net/mana: not in enabled drivers build config 00:02:34.135 net/memif: not in enabled drivers build config 00:02:34.135 net/mlx4: not in enabled drivers build config 00:02:34.135 net/mlx5: not in enabled drivers build config 00:02:34.135 net/mvneta: not in enabled drivers build config 00:02:34.135 net/mvpp2: not in enabled drivers build config 00:02:34.135 net/netvsc: not in enabled drivers build config 00:02:34.135 net/nfb: not in enabled drivers build config 00:02:34.135 net/nfp: not in enabled drivers build config 00:02:34.135 net/ngbe: not in enabled drivers build config 00:02:34.135 net/null: not in enabled drivers build config 00:02:34.135 net/octeontx: not in enabled drivers build config 00:02:34.135 net/octeon_ep: not in enabled drivers build config 00:02:34.135 net/pcap: not in enabled drivers build config 00:02:34.135 net/pfe: not in enabled drivers build config 00:02:34.135 net/qede: not in enabled drivers build config 00:02:34.135 net/ring: not in enabled drivers build config 00:02:34.135 net/sfc: not in enabled drivers build config 00:02:34.135 net/softnic: not in enabled drivers build config 00:02:34.135 net/tap: not in enabled drivers build config 00:02:34.135 net/thunderx: not in enabled drivers build config 00:02:34.135 net/txgbe: not in enabled drivers build config 00:02:34.135 net/vdev_netvsc: not in enabled drivers build config 00:02:34.135 net/vhost: not in enabled drivers build config 00:02:34.135 net/virtio: not in enabled drivers build config 00:02:34.135 net/vmxnet3: not in enabled drivers build config 00:02:34.135 raw/*: missing internal dependency, "rawdev" 00:02:34.135 crypto/armv8: not in enabled drivers build config 00:02:34.135 crypto/bcmfs: not in enabled drivers build config 00:02:34.135 
crypto/caam_jr: not in enabled drivers build config 00:02:34.135 crypto/ccp: not in enabled drivers build config 00:02:34.135 crypto/cnxk: not in enabled drivers build config 00:02:34.135 crypto/dpaa_sec: not in enabled drivers build config 00:02:34.135 crypto/dpaa2_sec: not in enabled drivers build config 00:02:34.135 crypto/mvsam: not in enabled drivers build config 00:02:34.135 crypto/nitrox: not in enabled drivers build config 00:02:34.135 crypto/null: not in enabled drivers build config 00:02:34.135 crypto/octeontx: not in enabled drivers build config 00:02:34.135 crypto/openssl: not in enabled drivers build config 00:02:34.135 crypto/scheduler: not in enabled drivers build config 00:02:34.135 crypto/uadk: not in enabled drivers build config 00:02:34.135 crypto/virtio: not in enabled drivers build config 00:02:34.135 compress/nitrox: not in enabled drivers build config 00:02:34.135 compress/octeontx: not in enabled drivers build config 00:02:34.135 compress/zlib: not in enabled drivers build config 00:02:34.135 regex/*: missing internal dependency, "regexdev" 00:02:34.135 ml/*: missing internal dependency, "mldev" 00:02:34.135 vdpa/ifc: not in enabled drivers build config 00:02:34.135 vdpa/mlx5: not in enabled drivers build config 00:02:34.135 vdpa/nfp: not in enabled drivers build config 00:02:34.135 vdpa/sfc: not in enabled drivers build config 00:02:34.135 event/*: missing internal dependency, "eventdev" 00:02:34.135 baseband/*: missing internal dependency, "bbdev" 00:02:34.135 gpu/*: missing internal dependency, "gpudev" 00:02:34.135 00:02:34.135 00:02:34.135 Build targets in project: 115 00:02:34.135 00:02:34.135 DPDK 24.03.0 00:02:34.135 00:02:34.135 User defined options 00:02:34.135 buildtype : debug 00:02:34.135 default_library : shared 00:02:34.135 libdir : lib 00:02:34.135 prefix : /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:02:34.135 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 
-I/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isalbuild -fPIC -Werror 00:02:34.135 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -L/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/.libs -lisal 00:02:34.135 cpu_instruction_set: native 00:02:34.135 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:34.135 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:34.135 enable_docs : false 00:02:34.135 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5 00:02:34.135 enable_kmods : false 00:02:34.135 max_lcores : 128 00:02:34.135 tests : false 00:02:34.135 00:02:34.135 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.397 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp' 00:02:34.667 [1/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:34.667 [2/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:34.667 [3/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:34.667 [4/378] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:34.667 [5/378] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:34.667 [6/378] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:34.667 [7/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:34.667 [8/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:34.667 [9/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:34.667 [10/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:34.667 [11/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:34.667 [12/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:34.667 [13/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:34.667 [14/378] Linking static target lib/librte_kvargs.a 00:02:34.667 [15/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:34.667 [16/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:34.667 [17/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:34.930 [18/378] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:34.930 [19/378] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:34.930 [20/378] Linking static target lib/librte_log.a 00:02:34.930 [21/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:34.930 [22/378] Linking static target lib/librte_pci.a 00:02:34.930 [23/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:34.930 [24/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:34.930 [25/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:34.930 [26/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:34.930 [27/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:34.930 [28/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:34.930 [29/378] Compiling C object 
lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:34.930 [30/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:34.930 [31/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:34.931 [32/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:35.190 [33/378] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:35.190 [34/378] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:35.190 [35/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:35.190 [36/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:35.190 [37/378] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:35.190 [38/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:35.190 [39/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:35.190 [40/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:35.190 [41/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:35.190 [42/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:35.190 [43/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:35.190 [44/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:35.190 [45/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:35.190 [46/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:35.190 [47/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:35.190 [48/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:35.190 [49/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:35.451 [50/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:35.451 [51/378] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.451 [52/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.452 [53/378] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:35.452 [54/378] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:35.452 [55/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:35.452 [56/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:35.452 [57/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:35.452 [58/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:35.452 [59/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:35.452 [60/378] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:35.452 [61/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:35.452 [62/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:35.452 [63/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:35.452 [64/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:35.452 [65/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:35.452 [66/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:35.452 [67/378] Linking static target lib/librte_telemetry.a 00:02:35.452 [68/378] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:35.452 [69/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:35.452 [70/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:35.452 [71/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:35.452 [72/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:35.452 [73/378] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:35.452 [74/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:35.452 [75/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:35.452 [76/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:35.452 [77/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:35.452 [78/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:35.452 [79/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:35.452 [80/378] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:35.452 [81/378] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.452 [82/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:35.452 [83/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:35.452 [84/378] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:35.452 [85/378] Linking static target lib/librte_meter.a 00:02:35.452 [86/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:35.452 [87/378] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:35.452 [88/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:35.452 [89/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:35.452 [90/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:35.452 [91/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:35.452 [92/378] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:35.452 [93/378] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:35.452 [94/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:35.452 [95/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:35.452 [96/378] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:35.452 [97/378] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:35.452 [98/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:35.452 [99/378] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:35.452 [100/378] Linking static target lib/librte_ring.a 00:02:35.452 [101/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:35.452 [102/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:35.452 [103/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:35.452 [104/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:35.452 [105/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:35.452 [106/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:35.452 [107/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:35.452 [108/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:35.452 [109/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:35.452 [110/378] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:35.452 [111/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:35.452 [112/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o 00:02:35.452 [113/378] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:35.452 [114/378] Linking static target lib/librte_cmdline.a 00:02:35.714 [115/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:35.714 [116/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:35.714 [117/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:35.714 [118/378] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 
00:02:35.714 [119/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o 00:02:35.714 [120/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:35.714 [121/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:35.714 [122/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:35.714 [123/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:35.714 [124/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:35.714 [125/378] Linking static target lib/librte_timer.a 00:02:35.714 [126/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:35.714 [127/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:35.714 [128/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:35.714 [129/378] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:35.714 [130/378] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:35.714 [131/378] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:35.714 [132/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:35.714 [133/378] Linking static target lib/librte_mempool.a 00:02:35.714 [134/378] Linking static target lib/librte_rcu.a 00:02:35.714 [135/378] Linking static target lib/librte_net.a 00:02:35.714 [136/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:35.714 [137/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:35.714 [138/378] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:35.714 [139/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:35.714 [140/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:35.714 [141/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:35.714 [142/378] Compiling C 
object lib/librte_power.a.p/power_rte_power.c.o 00:02:35.714 [143/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:35.714 [144/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:35.714 [145/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:35.714 [146/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:35.714 [147/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:35.714 [148/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:35.714 [149/378] Linking static target lib/librte_dmadev.a 00:02:35.714 [150/378] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:35.714 [151/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:36.008 [152/378] Linking static target lib/librte_compressdev.a 00:02:36.008 [153/378] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:36.008 [154/378] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:36.008 [155/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:36.008 [156/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:02:36.008 [157/378] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.008 [158/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:36.008 [159/378] Linking static target lib/librte_mbuf.a 00:02:36.008 [160/378] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:36.008 [161/378] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.008 [162/378] Linking target lib/librte_log.so.24.1 00:02:36.008 [163/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:36.008 [164/378] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:36.008 [165/378] 
Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.008 [166/378] Linking static target lib/librte_power.a 00:02:36.267 [167/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:36.267 [168/378] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:36.267 [169/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:36.267 [170/378] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.267 [171/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:02:36.267 [172/378] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.267 [173/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:02:36.267 [174/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:36.267 [175/378] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:02:36.267 [176/378] Linking static target lib/librte_eal.a 00:02:36.267 [177/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:36.267 [178/378] Linking static target lib/librte_cryptodev.a 00:02:36.267 [179/378] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:36.267 [180/378] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.267 [181/378] Linking static target lib/librte_hash.a 00:02:36.267 [182/378] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:36.267 [183/378] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:36.267 [184/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:02:36.267 [185/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:02:36.267 [186/378] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:36.267 [187/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:36.267 [188/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:36.267 [189/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:36.267 [190/378] Linking static target lib/librte_reorder.a 00:02:36.267 [191/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:02:36.267 [192/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:36.267 [193/378] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.267 [194/378] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:36.267 [195/378] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:36.267 [196/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o 00:02:36.267 [197/378] Linking target lib/librte_kvargs.so.24.1 00:02:36.267 [198/378] Linking static target lib/librte_security.a 00:02:36.267 [199/378] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:36.267 [200/378] Linking target lib/librte_telemetry.so.24.1 00:02:36.267 [201/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:02:36.267 [202/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:02:36.267 [203/378] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:36.267 [204/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:02:36.267 [205/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:02:36.267 [206/378] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:36.267 [207/378] Compiling C object 
drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:02:36.267 [208/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:02:36.267 [209/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:02:36.267 [210/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:02:36.267 [211/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:02:36.267 [212/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:36.267 [213/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:02:36.267 [214/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:02:36.267 [215/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o 00:02:36.267 [216/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:02:36.525 [217/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o 00:02:36.525 [218/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:02:36.525 [219/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:02:36.525 [220/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:02:36.525 [221/378] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 00:02:36.525 [222/378] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:36.525 [223/378] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:36.525 [224/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o 00:02:36.525 [225/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o 00:02:36.525 [226/378] Compiling 
C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:02:36.525 [227/378] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:02:36.525 [228/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:02:36.525 [229/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:02:36.525 [230/378] Linking static target drivers/librte_bus_auxiliary.a 00:02:36.525 [231/378] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:36.525 [232/378] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.525 [233/378] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.525 [234/378] Linking static target drivers/librte_bus_vdev.a 00:02:36.525 [235/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:02:36.525 [236/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:02:36.525 [237/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:02:36.525 [238/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o 00:02:36.525 [239/378] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.525 [240/378] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:36.525 [241/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:02:36.525 [242/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:02:36.525 [243/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:02:36.525 [244/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 
00:02:36.525 [245/378] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:36.525 [246/378] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:36.525 [247/378] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.525 [248/378] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:36.525 [249/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:02:36.525 [250/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o 00:02:36.525 [251/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:02:36.525 [252/378] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.525 [253/378] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.525 [254/378] Linking static target drivers/librte_bus_pci.a 00:02:36.525 [255/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:36.525 [256/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:02:36.525 [257/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:02:36.525 [258/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:02:36.525 [259/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:02:36.525 [260/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:02:36.525 [261/378] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.525 [262/378] Compiling C object drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o 00:02:36.525 [263/378] Linking static target drivers/libtmp_rte_compress_mlx5.a 00:02:36.783 [264/378] Compiling 
C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:02:36.783 [265/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o 00:02:36.783 [266/378] Linking static target drivers/libtmp_rte_compress_isal.a 00:02:36.783 [267/378] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.783 [268/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:36.783 [269/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:36.783 [270/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o 00:02:36.783 [271/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:02:36.783 [272/378] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:02:36.783 [273/378] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:36.783 [274/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:02:36.783 [275/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:02:36.783 [276/378] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.783 [277/378] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.783 [278/378] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.783 [279/378] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.783 [280/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:02:36.783 [281/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:02:36.783 [282/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 00:02:36.783 [283/378] Linking static 
target drivers/librte_mempool_ring.a 00:02:36.783 [284/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:02:36.783 [285/378] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:02:36.783 [286/378] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.783 [287/378] Generating drivers/rte_compress_mlx5.pmd.c with a custom command 00:02:36.783 [288/378] Generating drivers/rte_compress_isal.pmd.c with a custom command 00:02:36.783 [289/378] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.783 [290/378] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.783 [291/378] Compiling C object drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:02:36.783 [292/378] Compiling C object drivers/librte_compress_mlx5.so.24.1.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:02:37.040 [293/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:37.040 [294/378] Linking static target drivers/librte_compress_mlx5.a 00:02:37.040 [295/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:02:37.040 [296/378] Compiling C object drivers/librte_compress_isal.so.24.1.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:02:37.041 [297/378] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:02:37.041 [298/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:02:37.041 [299/378] Linking static target drivers/libtmp_rte_common_mlx5.a 00:02:37.041 [300/378] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:02:37.041 [301/378] Linking static target drivers/librte_compress_isal.a 00:02:37.041 [302/378] Linking static target lib/librte_ethdev.a 00:02:37.041 [303/378] Compiling C object 
drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:02:37.041 [304/378] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:02:37.041 [305/378] Linking static target drivers/librte_crypto_mlx5.a 00:02:37.041 [306/378] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.041 [307/378] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:02:37.041 [308/378] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:02:37.041 [309/378] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:02:37.041 [310/378] Linking static target drivers/librte_crypto_ipsec_mb.a 00:02:37.041 [311/378] Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:02:37.298 [312/378] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:02:37.298 [313/378] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:02:37.298 [314/378] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.298 [315/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:37.298 [316/378] Linking static target drivers/librte_common_mlx5.a 00:02:37.556 [317/378] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.556 [318/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:02:37.556 [319/378] Linking static target drivers/libtmp_rte_common_qat.a 00:02:37.814 [320/378] Generating drivers/rte_common_qat.pmd.c with a custom command 00:02:37.814 [321/378] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:02:37.814 [322/378] Compiling C object 
drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o 00:02:38.071 [323/378] Linking static target drivers/librte_common_qat.a 00:02:38.328 [324/378] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.328 [325/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:38.328 [326/378] Linking static target lib/librte_vhost.a 00:02:40.860 [327/378] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.394 [328/378] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.577 [329/378] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.950 [330/378] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.950 [331/378] Linking target lib/librte_eal.so.24.1 00:02:49.208 [332/378] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:49.208 [333/378] Linking target lib/librte_timer.so.24.1 00:02:49.208 [334/378] Linking target lib/librte_ring.so.24.1 00:02:49.208 [335/378] Linking target lib/librte_meter.so.24.1 00:02:49.208 [336/378] Linking target lib/librte_pci.so.24.1 00:02:49.208 [337/378] Linking target drivers/librte_bus_auxiliary.so.24.1 00:02:49.208 [338/378] Linking target lib/librte_dmadev.so.24.1 00:02:49.208 [339/378] Linking target drivers/librte_bus_vdev.so.24.1 00:02:49.466 [340/378] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:49.466 [341/378] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols 00:02:49.466 [342/378] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:49.466 [343/378] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:49.466 [344/378] Generating symbol file 
lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:49.466 [345/378] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:49.466 [346/378] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols 00:02:49.466 [347/378] Linking target lib/librte_rcu.so.24.1 00:02:49.466 [348/378] Linking target lib/librte_mempool.so.24.1 00:02:49.466 [349/378] Linking target drivers/librte_bus_pci.so.24.1 00:02:49.724 [350/378] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:49.724 [351/378] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols 00:02:49.724 [352/378] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:49.724 [353/378] Linking target drivers/librte_mempool_ring.so.24.1 00:02:49.724 [354/378] Linking target lib/librte_mbuf.so.24.1 00:02:49.724 [355/378] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:49.982 [356/378] Linking target lib/librte_compressdev.so.24.1 00:02:49.982 [357/378] Linking target lib/librte_reorder.so.24.1 00:02:49.982 [358/378] Linking target lib/librte_net.so.24.1 00:02:49.982 [359/378] Linking target lib/librte_cryptodev.so.24.1 00:02:49.982 [360/378] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:49.982 [361/378] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols 00:02:49.982 [362/378] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:49.982 [363/378] Linking target lib/librte_hash.so.24.1 00:02:49.982 [364/378] Linking target lib/librte_cmdline.so.24.1 00:02:49.982 [365/378] Linking target lib/librte_security.so.24.1 00:02:50.240 [366/378] Linking target drivers/librte_compress_isal.so.24.1 00:02:50.240 [367/378] Linking target lib/librte_ethdev.so.24.1 00:02:50.240 [368/378] Generating symbol file 
lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:50.240 [369/378] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols 00:02:50.240 [370/378] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:50.240 [371/378] Linking target drivers/librte_common_mlx5.so.24.1 00:02:50.240 [372/378] Linking target lib/librte_power.so.24.1 00:02:50.240 [373/378] Linking target lib/librte_vhost.so.24.1 00:02:50.499 [374/378] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols 00:02:50.499 [375/378] Linking target drivers/librte_compress_mlx5.so.24.1 00:02:50.499 [376/378] Linking target drivers/librte_crypto_mlx5.so.24.1 00:02:50.499 [377/378] Linking target drivers/librte_crypto_ipsec_mb.so.24.1 00:02:50.499 [378/378] Linking target drivers/librte_common_qat.so.24.1 00:02:50.499 INFO: autodetecting backend as ninja 00:02:50.499 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:51.875 CC lib/ut/ut.o 00:02:51.875 CC lib/ut_mock/mock.o 00:02:51.875 CC lib/log/log.o 00:02:51.875 CC lib/log/log_flags.o 00:02:51.875 CC lib/log/log_deprecated.o 00:02:52.133 LIB libspdk_ut.a 00:02:52.133 SO libspdk_ut.so.2.0 00:02:52.133 LIB libspdk_ut_mock.a 00:02:52.133 LIB libspdk_log.a 00:02:52.133 SO libspdk_ut_mock.so.6.0 00:02:52.133 SO libspdk_log.so.7.0 00:02:52.133 SYMLINK libspdk_ut.so 00:02:52.133 SYMLINK libspdk_ut_mock.so 00:02:52.133 SYMLINK libspdk_log.so 00:02:52.703 CC lib/ioat/ioat.o 00:02:52.703 CC lib/dma/dma.o 00:02:52.703 CXX lib/trace_parser/trace.o 00:02:52.703 CC lib/util/base64.o 00:02:52.703 CC lib/util/bit_array.o 00:02:52.703 CC lib/util/cpuset.o 00:02:52.703 CC lib/util/crc16.o 00:02:52.703 CC lib/util/crc32.o 00:02:52.703 CC lib/util/crc32c.o 00:02:52.703 CC lib/util/crc32_ieee.o 00:02:52.703 CC lib/util/crc64.o 00:02:52.703 CC lib/util/dif.o 00:02:52.703 CC 
lib/util/fd.o 00:02:52.703 CC lib/util/fd_group.o 00:02:52.703 CC lib/util/file.o 00:02:52.703 CC lib/util/hexlify.o 00:02:52.703 CC lib/util/iov.o 00:02:52.703 CC lib/util/math.o 00:02:52.703 CC lib/util/net.o 00:02:52.703 CC lib/util/pipe.o 00:02:52.703 CC lib/util/strerror_tls.o 00:02:52.703 CC lib/util/string.o 00:02:52.703 CC lib/util/uuid.o 00:02:52.703 CC lib/util/xor.o 00:02:52.703 CC lib/util/zipf.o 00:02:52.703 CC lib/vfio_user/host/vfio_user_pci.o 00:02:52.703 CC lib/vfio_user/host/vfio_user.o 00:02:52.703 LIB libspdk_dma.a 00:02:52.703 SO libspdk_dma.so.4.0 00:02:52.961 LIB libspdk_ioat.a 00:02:52.961 SYMLINK libspdk_dma.so 00:02:52.961 SO libspdk_ioat.so.7.0 00:02:52.961 SYMLINK libspdk_ioat.so 00:02:52.961 LIB libspdk_vfio_user.a 00:02:52.961 SO libspdk_vfio_user.so.5.0 00:02:53.218 LIB libspdk_util.a 00:02:53.218 SYMLINK libspdk_vfio_user.so 00:02:53.218 SO libspdk_util.so.10.0 00:02:53.218 SYMLINK libspdk_util.so 00:02:53.488 LIB libspdk_trace_parser.a 00:02:53.488 SO libspdk_trace_parser.so.5.0 00:02:53.488 SYMLINK libspdk_trace_parser.so 00:02:53.764 CC lib/rdma_utils/rdma_utils.o 00:02:53.764 CC lib/reduce/reduce.o 00:02:53.764 CC lib/rdma_provider/common.o 00:02:53.764 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:53.764 CC lib/idxd/idxd.o 00:02:53.764 CC lib/idxd/idxd_user.o 00:02:53.764 CC lib/idxd/idxd_kernel.o 00:02:53.764 CC lib/json/json_parse.o 00:02:53.764 CC lib/json/json_util.o 00:02:53.764 CC lib/json/json_write.o 00:02:53.764 CC lib/conf/conf.o 00:02:53.764 CC lib/vmd/vmd.o 00:02:53.764 CC lib/vmd/led.o 00:02:53.764 CC lib/env_dpdk/env.o 00:02:53.764 CC lib/env_dpdk/memory.o 00:02:53.764 CC lib/env_dpdk/pci.o 00:02:53.764 CC lib/env_dpdk/pci_ioat.o 00:02:53.764 CC lib/env_dpdk/init.o 00:02:53.764 CC lib/env_dpdk/threads.o 00:02:53.764 CC lib/env_dpdk/pci_virtio.o 00:02:53.765 CC lib/env_dpdk/pci_vmd.o 00:02:53.765 CC lib/env_dpdk/pci_idxd.o 00:02:53.765 CC lib/env_dpdk/pci_event.o 00:02:53.765 CC lib/env_dpdk/sigbus_handler.o 
00:02:53.765 CC lib/env_dpdk/pci_dpdk.o 00:02:53.765 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:53.765 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:54.023 LIB libspdk_rdma_provider.a 00:02:54.023 SO libspdk_rdma_provider.so.6.0 00:02:54.023 LIB libspdk_rdma_utils.a 00:02:54.023 LIB libspdk_conf.a 00:02:54.023 SO libspdk_conf.so.6.0 00:02:54.023 SO libspdk_rdma_utils.so.1.0 00:02:54.023 LIB libspdk_json.a 00:02:54.023 SYMLINK libspdk_rdma_provider.so 00:02:54.023 SO libspdk_json.so.6.0 00:02:54.023 SYMLINK libspdk_conf.so 00:02:54.023 SYMLINK libspdk_rdma_utils.so 00:02:54.023 SYMLINK libspdk_json.so 00:02:54.280 LIB libspdk_vmd.a 00:02:54.280 LIB libspdk_idxd.a 00:02:54.280 SO libspdk_vmd.so.6.0 00:02:54.280 SO libspdk_idxd.so.12.0 00:02:54.280 LIB libspdk_reduce.a 00:02:54.280 SYMLINK libspdk_vmd.so 00:02:54.280 SO libspdk_reduce.so.6.1 00:02:54.280 SYMLINK libspdk_idxd.so 00:02:54.538 SYMLINK libspdk_reduce.so 00:02:54.538 CC lib/jsonrpc/jsonrpc_server.o 00:02:54.538 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:54.538 CC lib/jsonrpc/jsonrpc_client.o 00:02:54.538 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:54.796 LIB libspdk_jsonrpc.a 00:02:54.796 SO libspdk_jsonrpc.so.6.0 00:02:54.796 SYMLINK libspdk_jsonrpc.so 00:02:55.055 LIB libspdk_env_dpdk.a 00:02:55.055 SO libspdk_env_dpdk.so.15.0 00:02:55.313 SYMLINK libspdk_env_dpdk.so 00:02:55.313 CC lib/rpc/rpc.o 00:02:55.572 LIB libspdk_rpc.a 00:02:55.572 SO libspdk_rpc.so.6.0 00:02:55.572 SYMLINK libspdk_rpc.so 00:02:55.830 CC lib/trace/trace.o 00:02:55.830 CC lib/trace/trace_flags.o 00:02:55.830 CC lib/trace/trace_rpc.o 00:02:55.830 CC lib/notify/notify.o 00:02:55.830 CC lib/notify/notify_rpc.o 00:02:55.830 CC lib/keyring/keyring.o 00:02:55.830 CC lib/keyring/keyring_rpc.o 00:02:56.088 LIB libspdk_notify.a 00:02:56.088 SO libspdk_notify.so.6.0 00:02:56.088 LIB libspdk_trace.a 00:02:56.088 LIB libspdk_keyring.a 00:02:56.088 SO libspdk_keyring.so.1.0 00:02:56.347 SYMLINK libspdk_notify.so 00:02:56.347 SO libspdk_trace.so.10.0 
00:02:56.347 SYMLINK libspdk_keyring.so 00:02:56.347 SYMLINK libspdk_trace.so 00:02:56.605 CC lib/thread/thread.o 00:02:56.605 CC lib/thread/iobuf.o 00:02:56.605 CC lib/sock/sock.o 00:02:56.605 CC lib/sock/sock_rpc.o 00:02:57.172 LIB libspdk_sock.a 00:02:57.172 SO libspdk_sock.so.10.0 00:02:57.172 SYMLINK libspdk_sock.so 00:02:57.739 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:57.739 CC lib/nvme/nvme_ctrlr.o 00:02:57.739 CC lib/nvme/nvme_fabric.o 00:02:57.739 CC lib/nvme/nvme_ns_cmd.o 00:02:57.739 CC lib/nvme/nvme_ns.o 00:02:57.739 CC lib/nvme/nvme_pcie_common.o 00:02:57.739 CC lib/nvme/nvme_pcie.o 00:02:57.739 CC lib/nvme/nvme_qpair.o 00:02:57.739 CC lib/nvme/nvme.o 00:02:57.739 CC lib/nvme/nvme_quirks.o 00:02:57.739 CC lib/nvme/nvme_transport.o 00:02:57.739 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:57.739 CC lib/nvme/nvme_discovery.o 00:02:57.739 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:57.739 CC lib/nvme/nvme_tcp.o 00:02:57.739 CC lib/nvme/nvme_opal.o 00:02:57.739 CC lib/nvme/nvme_io_msg.o 00:02:57.739 CC lib/nvme/nvme_poll_group.o 00:02:57.739 CC lib/nvme/nvme_zns.o 00:02:57.739 CC lib/nvme/nvme_stubs.o 00:02:57.739 CC lib/nvme/nvme_auth.o 00:02:57.739 CC lib/nvme/nvme_rdma.o 00:02:57.739 CC lib/nvme/nvme_cuse.o 00:02:57.997 LIB libspdk_thread.a 00:02:57.997 SO libspdk_thread.so.10.1 00:02:58.256 SYMLINK libspdk_thread.so 00:02:58.514 CC lib/accel/accel.o 00:02:58.514 CC lib/accel/accel_rpc.o 00:02:58.514 CC lib/accel/accel_sw.o 00:02:58.514 CC lib/init/json_config.o 00:02:58.514 CC lib/init/subsystem.o 00:02:58.514 CC lib/init/subsystem_rpc.o 00:02:58.514 CC lib/blob/blobstore.o 00:02:58.514 CC lib/init/rpc.o 00:02:58.514 CC lib/blob/request.o 00:02:58.514 CC lib/blob/zeroes.o 00:02:58.514 CC lib/blob/blob_bs_dev.o 00:02:58.514 CC lib/virtio/virtio.o 00:02:58.514 CC lib/virtio/virtio_vhost_user.o 00:02:58.514 CC lib/virtio/virtio_vfio_user.o 00:02:58.514 CC lib/virtio/virtio_pci.o 00:02:58.772 LIB libspdk_init.a 00:02:58.772 SO libspdk_init.so.5.0 00:02:58.772 LIB 
libspdk_virtio.a 00:02:59.031 SO libspdk_virtio.so.7.0 00:02:59.031 SYMLINK libspdk_init.so 00:02:59.031 SYMLINK libspdk_virtio.so 00:02:59.289 CC lib/event/app.o 00:02:59.289 CC lib/event/reactor.o 00:02:59.289 CC lib/event/log_rpc.o 00:02:59.289 CC lib/event/app_rpc.o 00:02:59.289 CC lib/event/scheduler_static.o 00:02:59.547 LIB libspdk_accel.a 00:02:59.547 SO libspdk_accel.so.16.0 00:02:59.547 LIB libspdk_nvme.a 00:02:59.547 SYMLINK libspdk_accel.so 00:02:59.806 LIB libspdk_event.a 00:02:59.806 SO libspdk_nvme.so.13.1 00:02:59.806 SO libspdk_event.so.14.0 00:02:59.806 SYMLINK libspdk_event.so 00:03:00.065 CC lib/bdev/bdev.o 00:03:00.065 CC lib/bdev/bdev_rpc.o 00:03:00.065 CC lib/bdev/bdev_zone.o 00:03:00.065 CC lib/bdev/part.o 00:03:00.065 CC lib/bdev/scsi_nvme.o 00:03:00.065 SYMLINK libspdk_nvme.so 00:03:01.445 LIB libspdk_blob.a 00:03:01.445 SO libspdk_blob.so.11.0 00:03:01.445 SYMLINK libspdk_blob.so 00:03:02.013 CC lib/lvol/lvol.o 00:03:02.013 CC lib/blobfs/blobfs.o 00:03:02.013 CC lib/blobfs/tree.o 00:03:02.582 LIB libspdk_bdev.a 00:03:02.582 SO libspdk_bdev.so.16.0 00:03:02.582 SYMLINK libspdk_bdev.so 00:03:02.582 LIB libspdk_blobfs.a 00:03:02.582 SO libspdk_blobfs.so.10.0 00:03:02.582 LIB libspdk_lvol.a 00:03:02.842 SYMLINK libspdk_blobfs.so 00:03:02.842 SO libspdk_lvol.so.10.0 00:03:02.842 SYMLINK libspdk_lvol.so 00:03:02.842 CC lib/ftl/ftl_core.o 00:03:02.842 CC lib/ftl/ftl_layout.o 00:03:02.842 CC lib/nbd/nbd.o 00:03:02.842 CC lib/ftl/ftl_init.o 00:03:02.842 CC lib/ftl/ftl_io.o 00:03:02.842 CC lib/nbd/nbd_rpc.o 00:03:02.842 CC lib/ftl/ftl_debug.o 00:03:02.842 CC lib/ublk/ublk.o 00:03:02.842 CC lib/ublk/ublk_rpc.o 00:03:02.842 CC lib/ftl/ftl_sb.o 00:03:02.842 CC lib/ftl/ftl_l2p.o 00:03:02.842 CC lib/ftl/ftl_l2p_flat.o 00:03:02.842 CC lib/ftl/ftl_nv_cache.o 00:03:02.842 CC lib/ftl/ftl_band.o 00:03:02.842 CC lib/ftl/ftl_band_ops.o 00:03:02.842 CC lib/ftl/ftl_writer.o 00:03:02.842 CC lib/ftl/ftl_rq.o 00:03:02.842 CC lib/nvmf/ctrlr.o 00:03:02.842 CC 
lib/ftl/ftl_reloc.o 00:03:02.842 CC lib/nvmf/ctrlr_discovery.o 00:03:02.842 CC lib/ftl/ftl_l2p_cache.o 00:03:02.842 CC lib/scsi/dev.o 00:03:02.842 CC lib/nvmf/ctrlr_bdev.o 00:03:02.842 CC lib/ftl/ftl_p2l.o 00:03:02.842 CC lib/nvmf/subsystem.o 00:03:02.842 CC lib/nvmf/nvmf_rpc.o 00:03:02.842 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:02.842 CC lib/scsi/lun.o 00:03:02.842 CC lib/nvmf/nvmf.o 00:03:02.842 CC lib/ftl/mngt/ftl_mngt.o 00:03:02.842 CC lib/scsi/port.o 00:03:02.842 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.842 CC lib/scsi/scsi.o 00:03:02.842 CC lib/nvmf/transport.o 00:03:02.842 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:02.842 CC lib/scsi/scsi_bdev.o 00:03:02.842 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:02.842 CC lib/nvmf/tcp.o 00:03:02.842 CC lib/scsi/scsi_pr.o 00:03:02.842 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:02.842 CC lib/nvmf/stubs.o 00:03:02.842 CC lib/scsi/scsi_rpc.o 00:03:02.842 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:02.842 CC lib/nvmf/mdns_server.o 00:03:02.842 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:02.842 CC lib/scsi/task.o 00:03:02.842 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:02.842 CC lib/nvmf/rdma.o 00:03:02.842 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:02.842 CC lib/nvmf/auth.o 00:03:02.842 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:02.842 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:02.842 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:02.842 CC lib/ftl/utils/ftl_conf.o 00:03:02.842 CC lib/ftl/utils/ftl_md.o 00:03:02.842 CC lib/ftl/utils/ftl_mempool.o 00:03:02.842 CC lib/ftl/utils/ftl_bitmap.o 00:03:02.842 CC lib/ftl/utils/ftl_property.o 00:03:03.100 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:03.100 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:03.100 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:03.100 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:03.100 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:03.100 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:03.100 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:03.100 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:03.100 CC 
lib/ftl/upgrade/ftl_sb_v3.o 00:03:03.100 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:03.100 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:03.100 CC lib/ftl/base/ftl_base_dev.o 00:03:03.100 CC lib/ftl/base/ftl_base_bdev.o 00:03:03.100 CC lib/ftl/ftl_trace.o 00:03:03.666 LIB libspdk_nbd.a 00:03:03.666 SO libspdk_nbd.so.7.0 00:03:03.666 SYMLINK libspdk_nbd.so 00:03:03.666 LIB libspdk_scsi.a 00:03:03.666 LIB libspdk_ublk.a 00:03:03.666 SO libspdk_scsi.so.9.0 00:03:03.666 SO libspdk_ublk.so.3.0 00:03:03.925 SYMLINK libspdk_ublk.so 00:03:03.926 SYMLINK libspdk_scsi.so 00:03:03.926 LIB libspdk_ftl.a 00:03:04.184 CC lib/iscsi/conn.o 00:03:04.184 CC lib/iscsi/init_grp.o 00:03:04.184 CC lib/iscsi/iscsi.o 00:03:04.184 CC lib/iscsi/md5.o 00:03:04.184 CC lib/iscsi/param.o 00:03:04.184 CC lib/iscsi/portal_grp.o 00:03:04.184 CC lib/iscsi/iscsi_subsystem.o 00:03:04.184 CC lib/iscsi/tgt_node.o 00:03:04.185 CC lib/iscsi/iscsi_rpc.o 00:03:04.185 CC lib/iscsi/task.o 00:03:04.185 CC lib/vhost/vhost_rpc.o 00:03:04.185 CC lib/vhost/vhost.o 00:03:04.185 CC lib/vhost/vhost_blk.o 00:03:04.185 CC lib/vhost/vhost_scsi.o 00:03:04.185 CC lib/vhost/rte_vhost_user.o 00:03:04.185 SO libspdk_ftl.so.9.0 00:03:04.443 SYMLINK libspdk_ftl.so 00:03:05.012 LIB libspdk_nvmf.a 00:03:05.270 LIB libspdk_vhost.a 00:03:05.270 SO libspdk_nvmf.so.19.0 00:03:05.270 SO libspdk_vhost.so.8.0 00:03:05.270 SYMLINK libspdk_vhost.so 00:03:05.529 SYMLINK libspdk_nvmf.so 00:03:05.529 LIB libspdk_iscsi.a 00:03:05.529 SO libspdk_iscsi.so.8.0 00:03:05.787 SYMLINK libspdk_iscsi.so 00:03:06.356 CC module/env_dpdk/env_dpdk_rpc.o 00:03:06.356 LIB libspdk_env_dpdk_rpc.a 00:03:06.356 CC module/accel/ioat/accel_ioat_rpc.o 00:03:06.356 CC module/accel/ioat/accel_ioat.o 00:03:06.356 CC module/keyring/linux/keyring_rpc.o 00:03:06.356 CC module/blob/bdev/blob_bdev.o 00:03:06.356 CC module/keyring/linux/keyring.o 00:03:06.356 CC module/sock/posix/posix.o 00:03:06.356 CC module/keyring/file/keyring_rpc.o 00:03:06.356 CC 
module/scheduler/gscheduler/gscheduler.o 00:03:06.356 CC module/keyring/file/keyring.o 00:03:06.356 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:03:06.356 SO libspdk_env_dpdk_rpc.so.6.0 00:03:06.356 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:03:06.356 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:06.356 CC module/accel/iaa/accel_iaa.o 00:03:06.356 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:06.356 CC module/accel/error/accel_error.o 00:03:06.356 CC module/accel/error/accel_error_rpc.o 00:03:06.356 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:03:06.356 CC module/accel/iaa/accel_iaa_rpc.o 00:03:06.356 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:03:06.356 CC module/accel/dsa/accel_dsa.o 00:03:06.357 CC module/accel/dsa/accel_dsa_rpc.o 00:03:06.623 SYMLINK libspdk_env_dpdk_rpc.so 00:03:06.623 LIB libspdk_keyring_linux.a 00:03:06.623 LIB libspdk_keyring_file.a 00:03:06.623 LIB libspdk_scheduler_gscheduler.a 00:03:06.623 SO libspdk_keyring_linux.so.1.0 00:03:06.623 LIB libspdk_accel_ioat.a 00:03:06.623 LIB libspdk_scheduler_dpdk_governor.a 00:03:06.623 SO libspdk_keyring_file.so.1.0 00:03:06.623 LIB libspdk_accel_error.a 00:03:06.623 SO libspdk_scheduler_gscheduler.so.4.0 00:03:06.623 LIB libspdk_accel_iaa.a 00:03:06.623 SO libspdk_accel_ioat.so.6.0 00:03:06.623 LIB libspdk_scheduler_dynamic.a 00:03:06.623 LIB libspdk_accel_dsa.a 00:03:06.623 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:06.623 SO libspdk_accel_error.so.2.0 00:03:06.623 SYMLINK libspdk_keyring_linux.so 00:03:06.623 SO libspdk_accel_iaa.so.3.0 00:03:06.623 SO libspdk_accel_dsa.so.5.0 00:03:06.623 LIB libspdk_blob_bdev.a 00:03:06.623 SYMLINK libspdk_keyring_file.so 00:03:06.623 SO libspdk_scheduler_dynamic.so.4.0 00:03:06.918 SYMLINK libspdk_scheduler_gscheduler.so 00:03:06.918 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:06.918 SYMLINK libspdk_accel_ioat.so 00:03:06.918 SO libspdk_blob_bdev.so.11.0 00:03:06.918 
SYMLINK libspdk_accel_error.so 00:03:06.918 SYMLINK libspdk_accel_iaa.so 00:03:06.918 SYMLINK libspdk_scheduler_dynamic.so 00:03:06.918 SYMLINK libspdk_accel_dsa.so 00:03:06.918 SYMLINK libspdk_blob_bdev.so 00:03:07.177 LIB libspdk_sock_posix.a 00:03:07.177 SO libspdk_sock_posix.so.6.0 00:03:07.177 SYMLINK libspdk_sock_posix.so 00:03:07.436 CC module/bdev/error/vbdev_error.o 00:03:07.436 CC module/bdev/error/vbdev_error_rpc.o 00:03:07.436 CC module/bdev/passthru/vbdev_passthru.o 00:03:07.436 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:07.436 CC module/bdev/malloc/bdev_malloc.o 00:03:07.436 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:07.436 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:07.436 CC module/bdev/null/bdev_null.o 00:03:07.436 CC module/bdev/delay/vbdev_delay.o 00:03:07.436 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:07.436 CC module/bdev/null/bdev_null_rpc.o 00:03:07.436 CC module/bdev/gpt/vbdev_gpt.o 00:03:07.436 CC module/bdev/crypto/vbdev_crypto.o 00:03:07.436 CC module/bdev/gpt/gpt.o 00:03:07.436 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:07.436 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:07.436 CC module/bdev/iscsi/bdev_iscsi.o 00:03:07.436 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:07.436 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:03:07.436 CC module/bdev/compress/vbdev_compress.o 00:03:07.436 CC module/bdev/compress/vbdev_compress_rpc.o 00:03:07.436 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:07.436 CC module/bdev/nvme/bdev_nvme.o 00:03:07.436 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:07.436 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:07.436 CC module/blobfs/bdev/blobfs_bdev.o 00:03:07.436 CC module/bdev/nvme/bdev_mdns_client.o 00:03:07.436 CC module/bdev/nvme/nvme_rpc.o 00:03:07.436 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:07.436 CC module/bdev/aio/bdev_aio_rpc.o 00:03:07.436 CC module/bdev/aio/bdev_aio.o 00:03:07.436 CC module/bdev/nvme/vbdev_opal.o 00:03:07.436 CC module/bdev/nvme/vbdev_opal_rpc.o 
00:03:07.436 CC module/bdev/lvol/vbdev_lvol.o 00:03:07.436 CC module/bdev/split/vbdev_split.o 00:03:07.436 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:07.436 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:07.436 CC module/bdev/split/vbdev_split_rpc.o 00:03:07.436 CC module/bdev/ftl/bdev_ftl.o 00:03:07.436 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:07.436 CC module/bdev/raid/bdev_raid.o 00:03:07.436 CC module/bdev/raid/bdev_raid_rpc.o 00:03:07.436 CC module/bdev/raid/bdev_raid_sb.o 00:03:07.436 CC module/bdev/raid/raid0.o 00:03:07.436 CC module/bdev/raid/raid1.o 00:03:07.436 CC module/bdev/raid/concat.o 00:03:07.436 LIB libspdk_accel_dpdk_compressdev.a 00:03:07.436 SO libspdk_accel_dpdk_compressdev.so.3.0 00:03:07.695 LIB libspdk_bdev_error.a 00:03:07.695 SYMLINK libspdk_accel_dpdk_compressdev.so 00:03:07.695 LIB libspdk_blobfs_bdev.a 00:03:07.695 SO libspdk_bdev_error.so.6.0 00:03:07.695 LIB libspdk_bdev_split.a 00:03:07.695 SO libspdk_blobfs_bdev.so.6.0 00:03:07.695 SO libspdk_bdev_split.so.6.0 00:03:07.695 LIB libspdk_accel_dpdk_cryptodev.a 00:03:07.695 LIB libspdk_bdev_passthru.a 00:03:07.695 LIB libspdk_bdev_null.a 00:03:07.695 SYMLINK libspdk_bdev_error.so 00:03:07.695 LIB libspdk_bdev_gpt.a 00:03:07.695 SO libspdk_bdev_passthru.so.6.0 00:03:07.695 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:03:07.696 SYMLINK libspdk_bdev_split.so 00:03:07.696 SO libspdk_bdev_null.so.6.0 00:03:07.696 SYMLINK libspdk_blobfs_bdev.so 00:03:07.696 LIB libspdk_bdev_ftl.a 00:03:07.696 SO libspdk_bdev_gpt.so.6.0 00:03:07.955 LIB libspdk_bdev_aio.a 00:03:07.955 LIB libspdk_bdev_crypto.a 00:03:07.955 LIB libspdk_bdev_delay.a 00:03:07.955 LIB libspdk_bdev_zone_block.a 00:03:07.955 SO libspdk_bdev_ftl.so.6.0 00:03:07.955 LIB libspdk_bdev_iscsi.a 00:03:07.955 LIB libspdk_bdev_malloc.a 00:03:07.955 SYMLINK libspdk_bdev_passthru.so 00:03:07.955 SO libspdk_bdev_aio.so.6.0 00:03:07.955 LIB libspdk_bdev_compress.a 00:03:07.955 SO libspdk_bdev_crypto.so.6.0 00:03:07.955 SYMLINK 
libspdk_accel_dpdk_cryptodev.so 00:03:07.955 SYMLINK libspdk_bdev_null.so 00:03:07.955 SO libspdk_bdev_malloc.so.6.0 00:03:07.955 SO libspdk_bdev_delay.so.6.0 00:03:07.955 SYMLINK libspdk_bdev_gpt.so 00:03:07.955 SO libspdk_bdev_zone_block.so.6.0 00:03:07.955 SO libspdk_bdev_iscsi.so.6.0 00:03:07.955 SO libspdk_bdev_compress.so.6.0 00:03:07.955 SYMLINK libspdk_bdev_ftl.so 00:03:07.955 SYMLINK libspdk_bdev_crypto.so 00:03:07.955 SYMLINK libspdk_bdev_aio.so 00:03:07.955 SYMLINK libspdk_bdev_malloc.so 00:03:07.955 SYMLINK libspdk_bdev_delay.so 00:03:07.955 SYMLINK libspdk_bdev_zone_block.so 00:03:07.955 LIB libspdk_bdev_virtio.a 00:03:07.955 SYMLINK libspdk_bdev_iscsi.so 00:03:07.955 LIB libspdk_bdev_lvol.a 00:03:07.955 SYMLINK libspdk_bdev_compress.so 00:03:07.955 SO libspdk_bdev_virtio.so.6.0 00:03:07.955 SO libspdk_bdev_lvol.so.6.0 00:03:08.214 SYMLINK libspdk_bdev_virtio.so 00:03:08.214 SYMLINK libspdk_bdev_lvol.so 00:03:08.474 LIB libspdk_bdev_raid.a 00:03:08.474 SO libspdk_bdev_raid.so.6.0 00:03:08.733 SYMLINK libspdk_bdev_raid.so 00:03:09.671 LIB libspdk_bdev_nvme.a 00:03:09.671 SO libspdk_bdev_nvme.so.7.0 00:03:09.671 SYMLINK libspdk_bdev_nvme.so 00:03:10.608 CC module/event/subsystems/vmd/vmd.o 00:03:10.608 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:10.608 CC module/event/subsystems/sock/sock.o 00:03:10.608 CC module/event/subsystems/scheduler/scheduler.o 00:03:10.608 CC module/event/subsystems/keyring/keyring.o 00:03:10.608 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:10.608 CC module/event/subsystems/iobuf/iobuf.o 00:03:10.608 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:10.608 LIB libspdk_event_vmd.a 00:03:10.608 LIB libspdk_event_keyring.a 00:03:10.608 LIB libspdk_event_scheduler.a 00:03:10.608 LIB libspdk_event_sock.a 00:03:10.608 LIB libspdk_event_iobuf.a 00:03:10.608 LIB libspdk_event_vhost_blk.a 00:03:10.608 SO libspdk_event_vmd.so.6.0 00:03:10.608 SO libspdk_event_keyring.so.1.0 00:03:10.608 SO libspdk_event_scheduler.so.4.0 
00:03:10.608 SO libspdk_event_sock.so.5.0 00:03:10.608 SO libspdk_event_iobuf.so.3.0 00:03:10.868 SO libspdk_event_vhost_blk.so.3.0 00:03:10.868 SYMLINK libspdk_event_keyring.so 00:03:10.868 SYMLINK libspdk_event_vmd.so 00:03:10.868 SYMLINK libspdk_event_scheduler.so 00:03:10.868 SYMLINK libspdk_event_sock.so 00:03:10.868 SYMLINK libspdk_event_iobuf.so 00:03:10.868 SYMLINK libspdk_event_vhost_blk.so 00:03:11.128 CC module/event/subsystems/accel/accel.o 00:03:11.388 LIB libspdk_event_accel.a 00:03:11.388 SO libspdk_event_accel.so.6.0 00:03:11.388 SYMLINK libspdk_event_accel.so 00:03:11.958 CC module/event/subsystems/bdev/bdev.o 00:03:11.958 LIB libspdk_event_bdev.a 00:03:12.218 SO libspdk_event_bdev.so.6.0 00:03:12.218 SYMLINK libspdk_event_bdev.so 00:03:12.478 CC module/event/subsystems/scsi/scsi.o 00:03:12.478 CC module/event/subsystems/ublk/ublk.o 00:03:12.478 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:12.478 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:12.478 CC module/event/subsystems/nbd/nbd.o 00:03:12.738 LIB libspdk_event_scsi.a 00:03:12.738 LIB libspdk_event_nbd.a 00:03:12.738 SO libspdk_event_scsi.so.6.0 00:03:12.738 SO libspdk_event_nbd.so.6.0 00:03:12.738 LIB libspdk_event_nvmf.a 00:03:12.738 SYMLINK libspdk_event_scsi.so 00:03:12.738 SYMLINK libspdk_event_nbd.so 00:03:12.738 LIB libspdk_event_ublk.a 00:03:12.738 SO libspdk_event_nvmf.so.6.0 00:03:12.998 SO libspdk_event_ublk.so.3.0 00:03:12.998 SYMLINK libspdk_event_nvmf.so 00:03:12.998 SYMLINK libspdk_event_ublk.so 00:03:13.257 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:13.257 CC module/event/subsystems/iscsi/iscsi.o 00:03:13.257 LIB libspdk_event_vhost_scsi.a 00:03:13.257 LIB libspdk_event_iscsi.a 00:03:13.516 SO libspdk_event_vhost_scsi.so.3.0 00:03:13.516 SO libspdk_event_iscsi.so.6.0 00:03:13.516 SYMLINK libspdk_event_vhost_scsi.so 00:03:13.516 SYMLINK libspdk_event_iscsi.so 00:03:13.775 SO libspdk.so.6.0 00:03:13.775 SYMLINK libspdk.so 00:03:14.035 TEST_HEADER 
include/spdk/accel.h 00:03:14.035 TEST_HEADER include/spdk/barrier.h 00:03:14.035 TEST_HEADER include/spdk/accel_module.h 00:03:14.035 TEST_HEADER include/spdk/assert.h 00:03:14.035 TEST_HEADER include/spdk/bdev_module.h 00:03:14.035 TEST_HEADER include/spdk/base64.h 00:03:14.035 TEST_HEADER include/spdk/bdev.h 00:03:14.035 TEST_HEADER include/spdk/bit_array.h 00:03:14.035 TEST_HEADER include/spdk/blob_bdev.h 00:03:14.035 TEST_HEADER include/spdk/bit_pool.h 00:03:14.035 TEST_HEADER include/spdk/bdev_zone.h 00:03:14.035 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.035 TEST_HEADER include/spdk/blobfs.h 00:03:14.035 CC test/rpc_client/rpc_client_test.o 00:03:14.035 TEST_HEADER include/spdk/conf.h 00:03:14.035 TEST_HEADER include/spdk/blob.h 00:03:14.035 TEST_HEADER include/spdk/cpuset.h 00:03:14.035 TEST_HEADER include/spdk/config.h 00:03:14.035 CC app/spdk_top/spdk_top.o 00:03:14.035 TEST_HEADER include/spdk/crc16.h 00:03:14.035 TEST_HEADER include/spdk/crc32.h 00:03:14.035 TEST_HEADER include/spdk/crc64.h 00:03:14.035 TEST_HEADER include/spdk/dif.h 00:03:14.035 TEST_HEADER include/spdk/dma.h 00:03:14.035 TEST_HEADER include/spdk/endian.h 00:03:14.035 TEST_HEADER include/spdk/env.h 00:03:14.035 CXX app/trace/trace.o 00:03:14.035 TEST_HEADER include/spdk/env_dpdk.h 00:03:14.035 TEST_HEADER include/spdk/event.h 00:03:14.035 TEST_HEADER include/spdk/fd.h 00:03:14.035 TEST_HEADER include/spdk/fd_group.h 00:03:14.035 TEST_HEADER include/spdk/ftl.h 00:03:14.035 TEST_HEADER include/spdk/file.h 00:03:14.035 TEST_HEADER include/spdk/hexlify.h 00:03:14.035 CC app/spdk_nvme_discover/discovery_aer.o 00:03:14.035 TEST_HEADER include/spdk/gpt_spec.h 00:03:14.035 TEST_HEADER include/spdk/histogram_data.h 00:03:14.035 CC app/spdk_lspci/spdk_lspci.o 00:03:14.035 CC app/spdk_nvme_identify/identify.o 00:03:14.035 TEST_HEADER include/spdk/idxd.h 00:03:14.035 CC app/spdk_nvme_perf/perf.o 00:03:14.035 TEST_HEADER include/spdk/idxd_spec.h 00:03:14.035 TEST_HEADER include/spdk/ioat.h 
00:03:14.035 TEST_HEADER include/spdk/init.h 00:03:14.035 TEST_HEADER include/spdk/iscsi_spec.h 00:03:14.035 TEST_HEADER include/spdk/json.h 00:03:14.035 TEST_HEADER include/spdk/ioat_spec.h 00:03:14.035 TEST_HEADER include/spdk/jsonrpc.h 00:03:14.035 TEST_HEADER include/spdk/keyring.h 00:03:14.035 TEST_HEADER include/spdk/keyring_module.h 00:03:14.035 TEST_HEADER include/spdk/likely.h 00:03:14.035 TEST_HEADER include/spdk/lvol.h 00:03:14.035 TEST_HEADER include/spdk/log.h 00:03:14.035 TEST_HEADER include/spdk/mmio.h 00:03:14.035 TEST_HEADER include/spdk/memory.h 00:03:14.035 CC app/trace_record/trace_record.o 00:03:14.035 TEST_HEADER include/spdk/nbd.h 00:03:14.035 TEST_HEADER include/spdk/net.h 00:03:14.035 TEST_HEADER include/spdk/notify.h 00:03:14.035 TEST_HEADER include/spdk/nvme_intel.h 00:03:14.035 TEST_HEADER include/spdk/nvme.h 00:03:14.035 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:14.035 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:14.035 TEST_HEADER include/spdk/nvme_spec.h 00:03:14.035 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:14.035 TEST_HEADER include/spdk/nvme_zns.h 00:03:14.035 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:14.035 TEST_HEADER include/spdk/nvmf_spec.h 00:03:14.035 TEST_HEADER include/spdk/nvmf.h 00:03:14.035 TEST_HEADER include/spdk/nvmf_transport.h 00:03:14.035 TEST_HEADER include/spdk/opal_spec.h 00:03:14.035 TEST_HEADER include/spdk/opal.h 00:03:14.035 TEST_HEADER include/spdk/pipe.h 00:03:14.035 TEST_HEADER include/spdk/pci_ids.h 00:03:14.035 TEST_HEADER include/spdk/queue.h 00:03:14.035 TEST_HEADER include/spdk/rpc.h 00:03:14.035 TEST_HEADER include/spdk/reduce.h 00:03:14.035 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:14.035 TEST_HEADER include/spdk/scheduler.h 00:03:14.035 TEST_HEADER include/spdk/scsi.h 00:03:14.035 TEST_HEADER include/spdk/stdinc.h 00:03:14.035 TEST_HEADER include/spdk/sock.h 00:03:14.035 TEST_HEADER include/spdk/string.h 00:03:14.035 TEST_HEADER include/spdk/scsi_spec.h 00:03:14.035 TEST_HEADER 
include/spdk/thread.h 00:03:14.035 TEST_HEADER include/spdk/trace_parser.h 00:03:14.035 TEST_HEADER include/spdk/trace.h 00:03:14.035 TEST_HEADER include/spdk/tree.h 00:03:14.035 TEST_HEADER include/spdk/ublk.h 00:03:14.035 TEST_HEADER include/spdk/util.h 00:03:14.035 CC app/iscsi_tgt/iscsi_tgt.o 00:03:14.035 TEST_HEADER include/spdk/version.h 00:03:14.035 TEST_HEADER include/spdk/uuid.h 00:03:14.035 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:14.035 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:14.035 TEST_HEADER include/spdk/vhost.h 00:03:14.035 TEST_HEADER include/spdk/vmd.h 00:03:14.035 TEST_HEADER include/spdk/zipf.h 00:03:14.035 TEST_HEADER include/spdk/xor.h 00:03:14.035 CXX test/cpp_headers/accel_module.o 00:03:14.035 CXX test/cpp_headers/accel.o 00:03:14.035 CXX test/cpp_headers/assert.o 00:03:14.035 CXX test/cpp_headers/base64.o 00:03:14.035 CXX test/cpp_headers/barrier.o 00:03:14.035 CXX test/cpp_headers/bdev.o 00:03:14.035 CXX test/cpp_headers/bdev_module.o 00:03:14.035 CXX test/cpp_headers/bdev_zone.o 00:03:14.035 CXX test/cpp_headers/blobfs_bdev.o 00:03:14.035 CXX test/cpp_headers/bit_array.o 00:03:14.035 CXX test/cpp_headers/blob_bdev.o 00:03:14.035 CXX test/cpp_headers/bit_pool.o 00:03:14.035 CXX test/cpp_headers/blob.o 00:03:14.035 CXX test/cpp_headers/blobfs.o 00:03:14.035 CXX test/cpp_headers/config.o 00:03:14.035 CXX test/cpp_headers/conf.o 00:03:14.035 CXX test/cpp_headers/cpuset.o 00:03:14.035 CXX test/cpp_headers/crc32.o 00:03:14.035 CXX test/cpp_headers/crc16.o 00:03:14.035 CXX test/cpp_headers/crc64.o 00:03:14.312 CXX test/cpp_headers/dma.o 00:03:14.312 CXX test/cpp_headers/endian.o 00:03:14.312 CXX test/cpp_headers/dif.o 00:03:14.312 CXX test/cpp_headers/env_dpdk.o 00:03:14.312 CXX test/cpp_headers/event.o 00:03:14.312 CXX test/cpp_headers/env.o 00:03:14.312 CC app/spdk_tgt/spdk_tgt.o 00:03:14.312 CXX test/cpp_headers/fd_group.o 00:03:14.312 CXX test/cpp_headers/fd.o 00:03:14.312 CXX test/cpp_headers/file.o 00:03:14.312 CXX 
test/cpp_headers/ftl.o 00:03:14.312 CXX test/cpp_headers/gpt_spec.o 00:03:14.312 CXX test/cpp_headers/hexlify.o 00:03:14.312 CXX test/cpp_headers/idxd.o 00:03:14.312 CXX test/cpp_headers/histogram_data.o 00:03:14.312 CC app/spdk_dd/spdk_dd.o 00:03:14.312 CXX test/cpp_headers/ioat.o 00:03:14.312 CXX test/cpp_headers/idxd_spec.o 00:03:14.312 CXX test/cpp_headers/ioat_spec.o 00:03:14.312 CXX test/cpp_headers/init.o 00:03:14.312 CXX test/cpp_headers/json.o 00:03:14.312 CXX test/cpp_headers/iscsi_spec.o 00:03:14.312 CXX test/cpp_headers/jsonrpc.o 00:03:14.312 CXX test/cpp_headers/keyring.o 00:03:14.312 CXX test/cpp_headers/keyring_module.o 00:03:14.312 CXX test/cpp_headers/likely.o 00:03:14.312 CXX test/cpp_headers/log.o 00:03:14.312 CXX test/cpp_headers/lvol.o 00:03:14.312 CXX test/cpp_headers/mmio.o 00:03:14.312 CC app/nvmf_tgt/nvmf_main.o 00:03:14.312 CXX test/cpp_headers/nbd.o 00:03:14.312 CXX test/cpp_headers/memory.o 00:03:14.312 CXX test/cpp_headers/net.o 00:03:14.312 CXX test/cpp_headers/notify.o 00:03:14.312 CXX test/cpp_headers/nvme.o 00:03:14.312 CXX test/cpp_headers/nvme_intel.o 00:03:14.312 CXX test/cpp_headers/nvme_ocssd.o 00:03:14.312 CXX test/cpp_headers/nvme_spec.o 00:03:14.312 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:14.312 CXX test/cpp_headers/nvmf_cmd.o 00:03:14.312 CXX test/cpp_headers/nvme_zns.o 00:03:14.312 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:14.312 CXX test/cpp_headers/nvmf_spec.o 00:03:14.312 CXX test/cpp_headers/nvmf.o 00:03:14.312 CXX test/cpp_headers/nvmf_transport.o 00:03:14.312 CXX test/cpp_headers/opal.o 00:03:14.312 CXX test/cpp_headers/opal_spec.o 00:03:14.312 CXX test/cpp_headers/pci_ids.o 00:03:14.312 CXX test/cpp_headers/pipe.o 00:03:14.312 CXX test/cpp_headers/queue.o 00:03:14.312 CXX test/cpp_headers/reduce.o 00:03:14.312 CXX test/cpp_headers/rpc.o 00:03:14.312 CXX test/cpp_headers/scheduler.o 00:03:14.312 CXX test/cpp_headers/scsi.o 00:03:14.312 CXX test/cpp_headers/scsi_spec.o 00:03:14.312 CXX test/cpp_headers/sock.o 
00:03:14.312 CXX test/cpp_headers/stdinc.o 00:03:14.312 CXX test/cpp_headers/string.o 00:03:14.312 CXX test/cpp_headers/thread.o 00:03:14.312 CXX test/cpp_headers/trace.o 00:03:14.312 CXX test/cpp_headers/trace_parser.o 00:03:14.312 CXX test/cpp_headers/tree.o 00:03:14.312 CXX test/cpp_headers/ublk.o 00:03:14.312 CXX test/cpp_headers/util.o 00:03:14.312 CXX test/cpp_headers/uuid.o 00:03:14.312 CXX test/cpp_headers/version.o 00:03:14.312 CC test/app/histogram_perf/histogram_perf.o 00:03:14.312 CC examples/util/zipf/zipf.o 00:03:14.312 CC test/thread/poller_perf/poller_perf.o 00:03:14.312 CC test/env/vtophys/vtophys.o 00:03:14.312 CC test/app/jsoncat/jsoncat.o 00:03:14.312 CC test/env/pci/pci_ut.o 00:03:14.312 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:14.312 CC test/env/memory/memory_ut.o 00:03:14.312 CXX test/cpp_headers/vfio_user_pci.o 00:03:14.312 CC examples/ioat/verify/verify.o 00:03:14.312 CC examples/ioat/perf/perf.o 00:03:14.312 CC app/fio/nvme/fio_plugin.o 00:03:14.592 CC test/app/stub/stub.o 00:03:14.592 CC test/dma/test_dma/test_dma.o 00:03:14.592 CXX test/cpp_headers/vfio_user_spec.o 00:03:14.592 CC app/fio/bdev/fio_plugin.o 00:03:14.592 CC test/app/bdev_svc/bdev_svc.o 00:03:14.592 LINK spdk_lspci 00:03:14.863 LINK rpc_client_test 00:03:14.863 LINK spdk_nvme_discover 00:03:15.123 CC test/env/mem_callbacks/mem_callbacks.o 00:03:15.123 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:15.123 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:15.123 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:15.123 LINK spdk_trace_record 00:03:15.123 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:15.123 LINK jsoncat 00:03:15.123 CXX test/cpp_headers/vhost.o 00:03:15.123 LINK zipf 00:03:15.123 CXX test/cpp_headers/vmd.o 00:03:15.123 LINK histogram_perf 00:03:15.123 CXX test/cpp_headers/xor.o 00:03:15.123 CXX test/cpp_headers/zipf.o 00:03:15.123 LINK nvmf_tgt 00:03:15.123 LINK vtophys 00:03:15.123 LINK iscsi_tgt 00:03:15.123 LINK env_dpdk_post_init 00:03:15.123 
LINK poller_perf 00:03:15.123 LINK spdk_tgt 00:03:15.123 LINK interrupt_tgt 00:03:15.123 LINK stub 00:03:15.123 LINK bdev_svc 00:03:15.123 LINK verify 00:03:15.123 LINK ioat_perf 00:03:15.123 LINK spdk_trace 00:03:15.382 LINK spdk_dd 00:03:15.382 LINK pci_ut 00:03:15.382 LINK test_dma 00:03:15.382 LINK nvme_fuzz 00:03:15.676 LINK vhost_fuzz 00:03:15.676 LINK spdk_nvme 00:03:15.676 LINK spdk_bdev 00:03:15.676 CC examples/sock/hello_world/hello_sock.o 00:03:15.676 LINK spdk_nvme_perf 00:03:15.676 CC examples/vmd/led/led.o 00:03:15.676 CC examples/idxd/perf/perf.o 00:03:15.676 CC examples/vmd/lsvmd/lsvmd.o 00:03:15.676 LINK spdk_nvme_identify 00:03:15.676 CC test/event/reactor/reactor.o 00:03:15.676 CC test/event/reactor_perf/reactor_perf.o 00:03:15.676 CC app/vhost/vhost.o 00:03:15.676 CC test/event/event_perf/event_perf.o 00:03:15.676 CC examples/thread/thread/thread_ex.o 00:03:15.676 CC test/event/app_repeat/app_repeat.o 00:03:15.676 LINK spdk_top 00:03:15.676 LINK mem_callbacks 00:03:15.676 CC test/event/scheduler/scheduler.o 00:03:15.942 LINK lsvmd 00:03:15.942 LINK led 00:03:15.942 LINK event_perf 00:03:15.942 LINK reactor 00:03:15.942 LINK reactor_perf 00:03:15.942 LINK hello_sock 00:03:15.942 LINK app_repeat 00:03:15.942 LINK vhost 00:03:15.942 LINK thread 00:03:15.942 CC test/nvme/boot_partition/boot_partition.o 00:03:15.942 CC test/nvme/reset/reset.o 00:03:15.942 CC test/nvme/simple_copy/simple_copy.o 00:03:15.942 CC test/nvme/reserve/reserve.o 00:03:15.942 CC test/nvme/startup/startup.o 00:03:15.942 CC test/nvme/cuse/cuse.o 00:03:15.942 CC test/nvme/fdp/fdp.o 00:03:15.942 CC test/nvme/overhead/overhead.o 00:03:15.942 CC test/nvme/err_injection/err_injection.o 00:03:15.942 CC test/nvme/compliance/nvme_compliance.o 00:03:15.942 CC test/nvme/aer/aer.o 00:03:15.942 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:15.942 CC test/nvme/e2edp/nvme_dp.o 00:03:15.942 CC test/nvme/connect_stress/connect_stress.o 00:03:15.942 LINK idxd_perf 00:03:15.942 LINK scheduler 
00:03:15.942 CC test/nvme/fused_ordering/fused_ordering.o 00:03:15.942 CC test/nvme/sgl/sgl.o 00:03:15.942 CC test/blobfs/mkfs/mkfs.o 00:03:15.942 CC test/accel/dif/dif.o 00:03:16.206 LINK memory_ut 00:03:16.206 CC test/lvol/esnap/esnap.o 00:03:16.206 LINK boot_partition 00:03:16.206 LINK err_injection 00:03:16.206 LINK doorbell_aers 00:03:16.206 LINK connect_stress 00:03:16.206 LINK reserve 00:03:16.206 LINK nvme_dp 00:03:16.206 LINK startup 00:03:16.206 LINK fused_ordering 00:03:16.206 LINK simple_copy 00:03:16.206 LINK mkfs 00:03:16.206 LINK reset 00:03:16.206 LINK aer 00:03:16.206 LINK overhead 00:03:16.206 LINK sgl 00:03:16.464 LINK iscsi_fuzz 00:03:16.464 LINK nvme_compliance 00:03:16.464 LINK fdp 00:03:16.464 CC examples/nvme/hello_world/hello_world.o 00:03:16.464 CC examples/nvme/reconnect/reconnect.o 00:03:16.464 CC examples/nvme/arbitration/arbitration.o 00:03:16.464 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:16.464 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:16.464 CC examples/nvme/abort/abort.o 00:03:16.464 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:16.464 CC examples/nvme/hotplug/hotplug.o 00:03:16.464 CC examples/accel/perf/accel_perf.o 00:03:16.464 LINK dif 00:03:16.722 CC examples/blob/cli/blobcli.o 00:03:16.722 CC examples/blob/hello_world/hello_blob.o 00:03:16.722 LINK cmb_copy 00:03:16.722 LINK pmr_persistence 00:03:16.722 LINK hello_world 00:03:16.722 LINK hotplug 00:03:16.722 LINK reconnect 00:03:16.722 LINK arbitration 00:03:16.722 LINK abort 00:03:16.980 LINK hello_blob 00:03:16.980 LINK nvme_manage 00:03:16.980 LINK accel_perf 00:03:17.238 LINK blobcli 00:03:17.238 CC test/bdev/bdevio/bdevio.o 00:03:17.238 LINK cuse 00:03:17.497 CC examples/bdev/hello_world/hello_bdev.o 00:03:17.497 CC examples/bdev/bdevperf/bdevperf.o 00:03:17.755 LINK bdevio 00:03:17.755 LINK hello_bdev 00:03:18.322 LINK bdevperf 00:03:18.889 CC examples/nvmf/nvmf/nvmf.o 00:03:19.455 LINK nvmf 00:03:20.901 LINK esnap 00:03:21.159 00:03:21.159 real 
1m25.969s 00:03:21.159 user 15m33.041s 00:03:21.159 sys 5m33.011s 00:03:21.159 13:03:31 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:21.159 13:03:31 make -- common/autotest_common.sh@10 -- $ set +x 00:03:21.159 ************************************ 00:03:21.159 END TEST make 00:03:21.159 ************************************ 00:03:21.159 13:03:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:21.159 13:03:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:21.159 13:03:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:21.159 13:03:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.159 13:03:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:21.159 13:03:31 -- pm/common@44 -- $ pid=638482 00:03:21.159 13:03:31 -- pm/common@50 -- $ kill -TERM 638482 00:03:21.159 13:03:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.159 13:03:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:21.159 13:03:31 -- pm/common@44 -- $ pid=638484 00:03:21.159 13:03:31 -- pm/common@50 -- $ kill -TERM 638484 00:03:21.159 13:03:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.159 13:03:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:21.159 13:03:31 -- pm/common@44 -- $ pid=638486 00:03:21.159 13:03:31 -- pm/common@50 -- $ kill -TERM 638486 00:03:21.159 13:03:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.159 13:03:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:21.159 13:03:31 -- pm/common@44 -- $ pid=638508 00:03:21.159 13:03:31 -- pm/common@50 -- $ sudo -E kill -TERM 638508 00:03:21.417 13:03:31 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:03:21.417 13:03:31 -- nvmf/common.sh@7 -- # uname -s 00:03:21.417 13:03:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:21.417 13:03:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:21.417 13:03:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:21.417 13:03:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:21.417 13:03:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:21.417 13:03:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:21.417 13:03:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:21.418 13:03:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:21.418 13:03:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:21.418 13:03:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:21.418 13:03:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:03:21.418 13:03:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:03:21.418 13:03:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:21.418 13:03:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:21.418 13:03:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:21.418 13:03:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:21.418 13:03:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:03:21.418 13:03:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:21.418 13:03:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:21.418 13:03:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:21.418 13:03:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.418 13:03:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.418 13:03:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.418 13:03:31 -- paths/export.sh@5 -- # export PATH 00:03:21.418 13:03:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.418 13:03:31 -- nvmf/common.sh@47 -- # : 0 00:03:21.418 13:03:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:21.418 13:03:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:21.418 13:03:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:21.418 13:03:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:21.418 13:03:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:21.418 13:03:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:21.418 13:03:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:21.418 13:03:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:21.418 13:03:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:21.418 13:03:31 -- spdk/autotest.sh@32 -- # 
uname -s 00:03:21.418 13:03:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:21.418 13:03:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:21.418 13:03:31 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:03:21.418 13:03:31 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:21.418 13:03:31 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:03:21.418 13:03:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:21.418 13:03:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:21.418 13:03:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:21.418 13:03:31 -- spdk/autotest.sh@48 -- # udevadm_pid=709474 00:03:21.418 13:03:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:21.418 13:03:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:21.418 13:03:31 -- pm/common@17 -- # local monitor 00:03:21.418 13:03:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.418 13:03:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.418 13:03:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.418 13:03:31 -- pm/common@21 -- # date +%s 00:03:21.418 13:03:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.418 13:03:31 -- pm/common@21 -- # date +%s 00:03:21.418 13:03:31 -- pm/common@25 -- # sleep 1 00:03:21.418 13:03:31 -- pm/common@21 -- # date +%s 00:03:21.418 13:03:31 -- pm/common@21 -- # date +%s 00:03:21.418 13:03:31 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721905411 00:03:21.418 13:03:31 -- pm/common@21 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721905411 00:03:21.418 13:03:31 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721905411 00:03:21.418 13:03:31 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721905411 00:03:21.418 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721905411_collect-vmstat.pm.log 00:03:21.418 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721905411_collect-cpu-load.pm.log 00:03:21.418 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721905411_collect-cpu-temp.pm.log 00:03:21.418 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721905411_collect-bmc-pm.bmc.pm.log 00:03:22.356 13:03:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:22.356 13:03:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:22.356 13:03:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:22.356 13:03:32 -- common/autotest_common.sh@10 -- # set +x 00:03:22.615 13:03:32 -- spdk/autotest.sh@59 -- # create_test_list 00:03:22.615 13:03:32 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:22.615 13:03:32 -- common/autotest_common.sh@10 -- # set +x 00:03:22.615 13:03:32 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh 00:03:22.615 13:03:32 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk 
00:03:22.615 13:03:32 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:03:22.615 13:03:32 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:03:22.615 13:03:32 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:03:22.615 13:03:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:22.615 13:03:32 -- common/autotest_common.sh@1455 -- # uname 00:03:22.615 13:03:32 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:22.615 13:03:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:22.615 13:03:32 -- common/autotest_common.sh@1475 -- # uname 00:03:22.615 13:03:32 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:22.615 13:03:32 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:22.615 13:03:32 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:22.615 13:03:32 -- spdk/autotest.sh@72 -- # hash lcov 00:03:22.615 13:03:32 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:22.615 13:03:32 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:22.615 --rc lcov_branch_coverage=1 00:03:22.615 --rc lcov_function_coverage=1 00:03:22.615 --rc genhtml_branch_coverage=1 00:03:22.616 --rc genhtml_function_coverage=1 00:03:22.616 --rc genhtml_legend=1 00:03:22.616 --rc geninfo_all_blocks=1 00:03:22.616 ' 00:03:22.616 13:03:32 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:22.616 --rc lcov_branch_coverage=1 00:03:22.616 --rc lcov_function_coverage=1 00:03:22.616 --rc genhtml_branch_coverage=1 00:03:22.616 --rc genhtml_function_coverage=1 00:03:22.616 --rc genhtml_legend=1 00:03:22.616 --rc geninfo_all_blocks=1 00:03:22.616 ' 00:03:22.616 13:03:32 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:22.616 --rc lcov_branch_coverage=1 00:03:22.616 --rc lcov_function_coverage=1 00:03:22.616 --rc genhtml_branch_coverage=1 00:03:22.616 --rc genhtml_function_coverage=1 00:03:22.616 --rc genhtml_legend=1 
00:03:22.616 --rc geninfo_all_blocks=1 00:03:22.616 --no-external' 00:03:22.616 13:03:32 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:22.616 --rc lcov_branch_coverage=1 00:03:22.616 --rc lcov_function_coverage=1 00:03:22.616 --rc genhtml_branch_coverage=1 00:03:22.616 --rc genhtml_function_coverage=1 00:03:22.616 --rc genhtml_legend=1 00:03:22.616 --rc geninfo_all_blocks=1 00:03:22.616 --no-external' 00:03:22.616 13:03:32 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:22.616 lcov: LCOV version 1.14 00:03:22.616 13:03:33 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 
00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 
00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:24.522 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:24.522 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no 
functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions 
found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:24.523 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:24.523 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:24.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 
00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:24.781 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:24.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:24.781 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:24.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:24.782 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:24.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:24.782 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:24.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:24.782 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:24.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:24.782 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:24.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:25.041 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:25.041 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:25.041 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:25.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:25.041 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:25.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:25.041 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:25.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:25.041 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:25.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:25.041 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:25.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:25.041 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:25.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:25.041 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:25.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:25.041 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:25.041 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:25.041 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:25.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:25.041 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:25.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:25.041 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:25.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:25.041 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:25.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:25.041 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:25.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:25.041 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:25.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:25.041 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:25.041 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:39.920 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:39.920 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:58.001 13:04:05 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:58.001 13:04:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:58.001 13:04:05 -- common/autotest_common.sh@10 -- # set +x 00:03:58.001 13:04:05 -- spdk/autotest.sh@91 -- # rm -f 00:03:58.001 13:04:05 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.905 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:59.905 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:59.905 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:59.905 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:59.905 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:59.905 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:59.905 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:59.905 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:59.905 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:59.905 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:59.905 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:59.905 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:59.905 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:59.905 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:59.905 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:00.227 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:00.227 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:04:00.227 13:04:10 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:00.227 13:04:10 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:00.227 13:04:10 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:00.227 13:04:10 -- common/autotest_common.sh@1670 -- # 
local nvme bdf 00:04:00.227 13:04:10 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:00.227 13:04:10 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:00.227 13:04:10 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:00.227 13:04:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:00.227 13:04:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:00.227 13:04:10 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:00.227 13:04:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.227 13:04:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:00.227 13:04:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:00.227 13:04:10 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:00.227 13:04:10 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:00.227 No valid GPT data, bailing 00:04:00.227 13:04:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:00.227 13:04:10 -- scripts/common.sh@391 -- # pt= 00:04:00.227 13:04:10 -- scripts/common.sh@392 -- # return 1 00:04:00.227 13:04:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:00.227 1+0 records in 00:04:00.227 1+0 records out 00:04:00.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00517995 s, 202 MB/s 00:04:00.227 13:04:10 -- spdk/autotest.sh@118 -- # sync 00:04:00.227 13:04:10 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:00.227 13:04:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:00.227 13:04:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:08.340 13:04:17 -- spdk/autotest.sh@124 -- # uname -s 00:04:08.340 13:04:17 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:08.340 13:04:17 -- spdk/autotest.sh@125 -- # run_test setup.sh 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:04:08.340 13:04:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.340 13:04:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.340 13:04:17 -- common/autotest_common.sh@10 -- # set +x 00:04:08.340 ************************************ 00:04:08.340 START TEST setup.sh 00:04:08.340 ************************************ 00:04:08.340 13:04:17 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:04:08.340 * Looking for test storage... 00:04:08.340 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:08.340 13:04:17 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:08.340 13:04:17 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:08.340 13:04:17 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:04:08.340 13:04:17 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.340 13:04:17 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.340 13:04:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:08.340 ************************************ 00:04:08.340 START TEST acl 00:04:08.340 ************************************ 00:04:08.340 13:04:17 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:04:08.340 * Looking for test storage... 
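The `get_zoned_devs`/`is_block_zoned` calls traced above boil down to one sysfs check: a block device counts as zoned when its `queue/zoned` attribute reads anything other than `none` (on this node `nvme0n1` reads `none`, so the loop collects nothing). A minimal sketch of that check, with helper names taken from the trace but the bodies reconstructed here as an illustration rather than the exact SPDK source:

```shell
#!/usr/bin/env bash
# Sketch of the zoned-device scan from the trace above.
# A device is zoned when /sys/block/<dev>/queue/zoned != "none".
is_block_zoned() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(</sys/block/$device/queue/zoned) != none ]]
}

# Collect zoned NVMe namespaces the way the autotest for-loop does.
get_zoned_devs() {
    local -n out=$1    # caller-supplied array name (bash 4.3+ nameref)
    local sysdir
    for sysdir in /sys/block/nvme*n*; do
        [[ -e $sysdir ]] || continue          # glob may not match
        is_block_zoned "${sysdir##*/}" && out+=("${sysdir##*/}")
    done
}

declare -a zoned=()
get_zoned_devs zoned
echo "zoned devices: ${#zoned[@]}"
```

On a node like the one in this log, every namespace reports `none`, so the array stays empty and the subsequent `(( 0 > 0 ))` guard skips the zoned-device handling entirely.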
00:04:08.340 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:08.340 13:04:17 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:08.340 13:04:17 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:08.340 13:04:17 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:08.340 13:04:17 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:08.340 13:04:17 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:08.340 13:04:17 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:08.340 13:04:17 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:08.340 13:04:17 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:08.340 13:04:17 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:08.340 13:04:17 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:08.340 13:04:17 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:08.340 13:04:17 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:08.340 13:04:17 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:08.340 13:04:17 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:08.340 13:04:17 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.340 13:04:17 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.525 13:04:22 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:12.525 13:04:22 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:12.525 13:04:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.525 13:04:22 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:12.525 13:04:22 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.525 13:04:22 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:04:16.712 Hugepages 00:04:16.712 node hugesize free / total 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.712 00:04:16.712 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:16.712 13:04:26 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:16.712 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:16.713 
13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:16.713 13:04:26 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:16.713 13:04:26 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.713 13:04:26 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.713 13:04:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:16.713 ************************************ 00:04:16.713 START TEST denied 00:04:16.713 ************************************ 00:04:16.713 13:04:26 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:16.713 13:04:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:04:16.713 13:04:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:16.713 13:04:26 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:04:16.713 13:04:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.713 13:04:26 setup.sh.acl.denied -- setup/common.sh@10 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:20.897 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:04:20.897 13:04:31 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:04:20.897 13:04:31 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:20.897 13:04:31 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:20.897 13:04:31 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:04:20.897 13:04:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:04:20.897 13:04:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:20.897 13:04:31 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:20.897 13:04:31 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:20.897 13:04:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.897 13:04:31 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.170 00:04:26.170 real 0m9.539s 00:04:26.170 user 0m2.858s 00:04:26.170 sys 0m5.894s 00:04:26.170 13:04:36 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.170 13:04:36 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:26.170 ************************************ 00:04:26.170 END TEST denied 00:04:26.170 ************************************ 00:04:26.170 13:04:36 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:26.170 13:04:36 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.170 13:04:36 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.170 13:04:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:26.170 ************************************ 00:04:26.170 START TEST allowed 00:04:26.170 
************************************ 00:04:26.170 13:04:36 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:26.170 13:04:36 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:04:26.170 13:04:36 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:26.170 13:04:36 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:04:26.170 13:04:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.170 13:04:36 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:32.776 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:32.776 13:04:42 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:32.776 13:04:42 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:32.776 13:04:42 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:32.776 13:04:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.776 13:04:42 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.967 00:04:36.967 real 0m10.683s 00:04:36.967 user 0m3.064s 00:04:36.967 sys 0m5.833s 00:04:36.967 13:04:47 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.967 13:04:47 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:36.967 ************************************ 00:04:36.967 END TEST allowed 00:04:36.967 ************************************ 00:04:36.967 00:04:36.967 real 0m29.294s 00:04:36.967 user 0m9.092s 00:04:36.967 sys 0m17.931s 00:04:36.967 13:04:47 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.967 13:04:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:36.967 ************************************ 00:04:36.967 END TEST acl 00:04:36.967 ************************************ 00:04:36.967 13:04:47 setup.sh 
-- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:04:36.967 13:04:47 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.967 13:04:47 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.967 13:04:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:36.967 ************************************ 00:04:36.967 START TEST hugepages 00:04:36.967 ************************************ 00:04:36.967 13:04:47 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:04:36.967 * Looking for test storage... 00:04:36.967 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:36.967 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:36.967 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:36.967 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:36.967 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:36.967 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:36.967 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:36.967 13:04:47 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:36.967 13:04:47 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:36.967 13:04:47 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:36.967 13:04:47 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:36.967 13:04:47 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.967 13:04:47 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.967 13:04:47 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.967 13:04:47 
setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.967 13:04:47 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.967 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.967 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41311764 kB' 'MemAvailable: 45299796 kB' 'Buffers: 6064 kB' 'Cached: 10576740 kB' 'SwapCached: 0 kB' 'Active: 7403732 kB' 'Inactive: 3689560 kB' 'Active(anon): 7005308 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513360 kB' 'Mapped: 206516 kB' 'Shmem: 6494820 kB' 'KReclaimable: 547756 kB' 'Slab: 1197356 kB' 'SReclaimable: 547756 kB' 'SUnreclaim: 649600 kB' 'KernelStack: 22400 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 8495216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218892 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 
13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 
13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.968 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@37 -- # 
local node hp 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:36.969 13:04:47 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:36.969 13:04:47 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.969 13:04:47 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.969 13:04:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:36.969 ************************************ 00:04:36.969 START TEST default_setup 00:04:36.969 ************************************ 00:04:36.969 13:04:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:04:36.969 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # 
get_test_nr_hugepages 2097152 0 00:04:36.969 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:36.969 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:36.969 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:36.969 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:36.969 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:36.969 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:36.969 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:36.970 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:36.970 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:36.970 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.970 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:36.970 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:36.970 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.970 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.970 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:36.970 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:36.970 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:36.970 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:36.970 13:04:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup 
output 00:04:36.970 13:04:47 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.970 13:04:47 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:41.166 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:41.166 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:41.166 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:41.166 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:41.166 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:41.166 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:41.166 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:41.166 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:41.166 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:41.166 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:41.166 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:41.166 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:41.166 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:41.166 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:41.166 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:41.166 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:43.078 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:43.078 13:04:53 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43474068 kB' 'MemAvailable: 47461428 kB' 'Buffers: 6064 kB' 'Cached: 10576896 kB' 'SwapCached: 0 kB' 'Active: 7421616 kB' 'Inactive: 3689560 kB' 'Active(anon): 7023192 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531056 kB' 'Mapped: 206716 kB' 'Shmem: 6494976 kB' 'KReclaimable: 547084 kB' 'Slab: 1195820 kB' 'SReclaimable: 
547084 kB' 'SUnreclaim: 648736 kB' 'KernelStack: 22240 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8495876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218860 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 
13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 
13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@19 -- # local var val 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43475872 kB' 'MemAvailable: 47463232 kB' 'Buffers: 6064 kB' 'Cached: 10576896 kB' 'SwapCached: 0 kB' 'Active: 7420496 kB' 'Inactive: 3689560 kB' 'Active(anon): 7022072 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530320 kB' 'Mapped: 206636 kB' 'Shmem: 6494976 kB' 'KReclaimable: 547084 kB' 'Slab: 1195812 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 648728 kB' 'KernelStack: 22272 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8495892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:43.080 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.080 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read / continue iterations for every remaining non-matching meminfo key (MemFree through HugePages_Rsvd) elided ...] 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup --
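The hugepage figures in the snapshot above are internally consistent: with a 2048 kB page size, the 1024 allocated pages account for exactly the reported `Hugetlb` total. A quick arithmetic check on values copied from the log:

```shell
# Values copied from the meminfo snapshot printed in the trace above.
hugepages_total=1024
hugepagesize_kb=2048

# HugePages_Total * Hugepagesize should reproduce the 'Hugetlb: 2097152 kB'
# line, i.e. 2 GiB reserved as 2 MiB hugepages.
hugetlb_kb=$(( hugepages_total * hugepagesize_kb ))
echo "$hugetlb_kb"   # prints 2097152
```

With `HugePages_Free: 1024` equal to `HugePages_Total`, none of the pool is in use yet at this point in the test, and `HugePages_Rsvd`/`HugePages_Surp` are both 0, which is what the two `get_meminfo` calls traced here go on to confirm.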
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43477676 kB' 'MemAvailable: 47465036 kB' 'Buffers: 6064 kB' 'Cached: 10576928 kB' 'SwapCached: 0 kB' 'Active: 7420780 kB' 'Inactive: 3689560 kB' 'Active(anon): 7022356 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530516 kB' 'Mapped: 206636 kB' 'Shmem: 6495008 kB' 'KReclaimable: 547084 kB' 'Slab: 1195812 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 648728 kB' 'KernelStack: 22256 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8495916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218908 kB' 
'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.345 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read / continue iterations for the remaining non-matching meminfo keys (MemFree through KernelStack; trace truncated here) elided ...]
setup/common.sh@31 -- # read -r var val _ 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:43.347 nr_hugepages=1024 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:43.347 resv_hugepages=0 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:43.347 
surplus_hugepages=0 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:43.347 anon_hugepages=0 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43476320 kB' 'MemAvailable: 47463680 kB' 'Buffers: 6064 kB' 'Cached: 10576932 kB' 'SwapCached: 0 kB' 'Active: 7420152 kB' 'Inactive: 3689560 kB' 'Active(anon): 7021728 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529916 kB' 
'Mapped: 206636 kB' 'Shmem: 6495012 kB' 'KReclaimable: 547084 kB' 'Slab: 1195876 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 648792 kB' 'KernelStack: 22192 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8495688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218892 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.347 13:04:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.348 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 
-- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 
'MemFree: 26382448 kB' 'MemUsed: 6256692 kB' 'SwapCached: 0 kB' 'Active: 2303700 kB' 'Inactive: 231284 kB' 'Active(anon): 2170652 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2162940 kB' 'Mapped: 87720 kB' 'AnonPages: 375192 kB' 'Shmem: 1798608 kB' 'KernelStack: 12680 kB' 'PageTables: 5444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 212480 kB' 'Slab: 514788 kB' 'SReclaimable: 212480 kB' 'SUnreclaim: 302308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 
13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:43.351 13:04:53 
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:43.351 node0=1024 expecting 1024 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:43.351 00:04:43.351 real 0m6.404s 00:04:43.351 user 0m1.787s 00:04:43.351 sys 0m2.895s 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.351 13:04:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:43.351 ************************************ 00:04:43.351 END TEST default_setup 00:04:43.351 ************************************ 00:04:43.351 13:04:53 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:43.351 13:04:53 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.351 13:04:53 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.351 13:04:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:43.351 ************************************ 00:04:43.351 START TEST per_node_1G_alloc 00:04:43.351 ************************************ 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:43.351 
13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:43.351 13:04:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.351 13:04:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:47.555 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:47.555 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:47.555 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:47.555 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:47.555 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:47.555 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:47.555 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:47.555 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:47.555 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:47.555 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:47.555 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:47.555 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:47.555 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:47.555 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:47.555 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:47.555 
0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:47.555 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.555 13:04:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43470448 kB' 'MemAvailable: 47457808 kB' 'Buffers: 6064 kB' 'Cached: 10577040 kB' 'SwapCached: 0 kB' 'Active: 7420876 kB' 'Inactive: 3689560 kB' 'Active(anon): 7022452 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530620 kB' 'Mapped: 205492 kB' 'Shmem: 6495120 kB' 'KReclaimable: 547084 kB' 'Slab: 1196252 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 649168 kB' 'KernelStack: 22096 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8503608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.555 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 
13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.556 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.557 13:04:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43472044 kB' 'MemAvailable: 47459404 kB' 'Buffers: 6064 kB' 'Cached: 10577044 kB' 'SwapCached: 0 kB' 'Active: 7418680 kB' 'Inactive: 3689560 kB' 'Active(anon): 7020256 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528364 kB' 'Mapped: 205460 kB' 'Shmem: 6495124 kB' 'KReclaimable: 547084 kB' 'Slab: 1196256 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 649172 kB' 'KernelStack: 22064 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8486772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218812 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 
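The trace above is the xtrace output of `get_meminfo`: the script reads `/proc/meminfo` (or a per-node `meminfo`) line by line with `IFS=': ' read -r var val _`, hits `continue` for every key that is not the one requested, and echoes the matched value. As a minimal sketch of that same pattern outside the shell (the function name and sample text here are illustrative, not part of SPDK):

```python
def get_meminfo(text: str, key: str) -> int:
    """Scan meminfo-style 'Key: value kB' lines and return the value for
    `key` in kB (0 if absent), mirroring the shell loop in the trace:
    split on the colon, `continue` on non-matching keys, emit the value."""
    for line in text.splitlines():
        parts = line.split(":", 1)
        if len(parts) != 2:
            continue
        var, val = parts[0].strip(), parts[1].strip()
        if var != key:
            continue  # the shell loop does `continue` here for every other key
        return int(val.split()[0])  # drop the trailing 'kB' unit if present
    return 0  # matches the script's `echo 0` fallback

# Hypothetical sample in the same format as the snapshot printed above:
sample = "MemTotal: 60295220 kB\nHugePages_Surp: 0\nAnonHugePages: 0 kB\n"
```

The shell version walks every key on every lookup (hence the long run of `continue` lines in the log); a dict built in one pass would avoid that, but the line-by-line scan keeps the script dependency-free.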
00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.557 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.558 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.559 13:04:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43471980 kB' 'MemAvailable: 47459340 kB' 'Buffers: 6064 kB' 'Cached: 10577064 kB' 'SwapCached: 0 kB' 'Active: 7418704 kB' 'Inactive: 3689560 kB' 
'Active(anon): 7020280 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528368 kB' 'Mapped: 205460 kB' 'Shmem: 6495144 kB' 'KReclaimable: 547084 kB' 'Slab: 1196248 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 649164 kB' 'KernelStack: 22064 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8486800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218812 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.559 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 
13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.560 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:47.561 nr_hugepages=1024 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:47.561 resv_hugepages=0 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:47.561 surplus_hugepages=0 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:47.561 anon_hugepages=0 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 
-- # get_meminfo HugePages_Total 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.561 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.562 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43471988 kB' 'MemAvailable: 47459348 kB' 'Buffers: 6064 kB' 'Cached: 10577104 kB' 'SwapCached: 0 kB' 'Active: 7418396 kB' 'Inactive: 3689560 kB' 'Active(anon): 7019972 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527968 kB' 'Mapped: 205460 kB' 'Shmem: 6495184 kB' 'KReclaimable: 547084 kB' 'Slab: 1196248 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 649164 kB' 'KernelStack: 22048 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8486956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218812 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB'
00:04:47.562 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [field scan: every /proc/meminfo field from MemTotal through Unaccepted was read with `IFS=': ' read -r var val _` and skipped via `continue` until HugePages_Total matched; the repeated per-field trace triples are condensed here]
00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:47.563 13:04:57
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.563 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.564 13:04:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.564 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.564 13:04:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 27433484 kB' 'MemUsed: 5205656 kB' 'SwapCached: 0 kB' 'Active: 2303816 kB' 'Inactive: 231284 kB' 'Active(anon): 2170768 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2162984 kB' 'Mapped: 86936 kB' 'AnonPages: 375192 kB' 'Shmem: 1798652 kB' 'KernelStack: 12472 kB' 'PageTables: 4928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 212480 kB' 'Slab: 514876 kB' 'SReclaimable: 212480 kB' 'SUnreclaim: 302396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:47.564 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [field scan: node0 meminfo fields from MemTotal onward read with `IFS=': ' read -r var val _` and skipped via `continue` while looking for HugePages_Surp; repeated per-field trace triples condensed; log truncated mid-scan]
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 
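For readability: the long field-by-field `[[ … ]] / continue` trace above (and the matching one that follows for node 1) is the `get_meminfo` helper scanning a meminfo file. Below is a minimal standalone sketch of that pattern; the names mirror setup/common.sh, but this is a simplified illustration, not the SPDK script itself.

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: return one field
# from /proc/meminfo, or from the per-node file when a node is given.
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem
    # Per-node counters live under /sys when NUMA is available.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines carry a "Node <N> " prefix; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        # This read + skip-until-match is the repeated continue loop
        # visible in the xtrace output.
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done
    return 1
}
```

On the machine in this log, `get_meminfo HugePages_Surp 1` resolves to the node-1 file and returns 0, which is exactly the `echo 0` / `return 0` pair in the trace.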
00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.565 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16039132 kB' 'MemUsed: 11616948 kB' 'SwapCached: 0 kB' 'Active: 5115360 kB' 'Inactive: 3458276 kB' 'Active(anon): 4849984 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8420212 kB' 'Mapped: 118524 kB' 'AnonPages: 153536 kB' 'Shmem: 4696560 kB' 'KernelStack: 9576 kB' 'PageTables: 3376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334604 kB' 'Slab: 681372 kB' 'SReclaimable: 334604 kB' 'SUnreclaim: 346768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:47.566 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [trace condensed: node1 meminfo fields MemTotal through HugePages_Free are each read and skipped via `continue` until HugePages_Surp matches]
00:04:47.567 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.567 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.567 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:47.567 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:47.567 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:47.567 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:47.567 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- #
sorted_s[nodes_sys[node]]=1
00:04:47.567 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:47.567 node0=512 expecting 512
00:04:47.567 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:47.567 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:47.567 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:47.567 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:47.567 node1=512 expecting 512
00:04:47.567 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:47.567
00:04:47.567 real	0m4.220s
00:04:47.567 user	0m1.592s
00:04:47.567 sys	0m2.707s
00:04:47.567 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:47.567 13:04:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:47.567 ************************************
00:04:47.567 END TEST per_node_1G_alloc
00:04:47.567 ************************************
00:04:47.827 13:04:58 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:47.827 13:04:58 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:47.827 13:04:58 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:47.827 13:04:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:47.827 ************************************
00:04:47.827 START TEST even_2G_alloc
00:04:47.827 ************************************
00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- #
get_test_nr_hugepages 2097152 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 
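The hugepages.sh@81-84 loop being traced here divides the 1024 test pages evenly across the two NUMA nodes, assigning 512 to each from the highest-numbered node down. Below is a hedged sketch of that division; the names mirror setup/hugepages.sh, but this is a simplified illustration only.

```shell
#!/usr/bin/env bash
# Sketch of the even per-node split: walk node indices from
# _no_nodes-1 down to 0, giving each node an equal share.
split_hugepages_per_node() {
    local _nr_hugepages=$1 _no_nodes=$2
    local -a nodes_test
    local share=$(( _nr_hugepages / _no_nodes ))
    while (( _no_nodes > 0 )); do
        # For 1024 pages on 2 nodes: nodes_test[1]=512, then nodes_test[0]=512
        nodes_test[_no_nodes - 1]=$share
        _no_nodes=$(( _no_nodes - 1 ))
    done
    echo "${nodes_test[@]}"
}

split_hugepages_per_node 1024 2   # prints: 512 512
```

This is why both checks later in the log expect 512: `node0=512 expecting 512` and `node1=512 expecting 512`.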
00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.827 13:04:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:52.028 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:52.028 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:52.028 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:52.028 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:52.028 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:52.028 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:52.028 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:52.028 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:52.028 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:52.028 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:52.028 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:52.028 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:52.028 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:52.028 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:52.028 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:52.028 0000:80:04.0 (8086 2021): Already using the vfio-pci 
driver 00:04:52.028 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:52.028 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:52.028 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:52.028 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.028 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.028 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:52.028 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:52.028 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:52.028 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.028 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.028 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.028 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.028 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.028 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.028 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.028 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.028 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.029 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.029 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.029 13:05:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.029 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.029 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43505420 kB' 'MemAvailable: 47492780 kB' 'Buffers: 6064 kB' 'Cached: 10577220 kB' 'SwapCached: 0 kB' 'Active: 7417772 kB' 'Inactive: 3689560 kB' 'Active(anon): 7019348 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527252 kB' 'Mapped: 205472 kB' 'Shmem: 6495300 kB' 'KReclaimable: 547084 kB' 'Slab: 1195668 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 648584 kB' 'KernelStack: 22128 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8488104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218828 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:52.029
[repeated setup/common.sh@32 xtrace elided: each field from MemTotal through HardwareCorrupted is compared against AnonHugePages and skipped via continue]
00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.030 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43506088 kB' 'MemAvailable: 47493448 kB' 'Buffers: 6064 kB' 'Cached: 10577224 kB' 'SwapCached: 0 kB' 'Active: 7417368 kB' 'Inactive: 3689560 kB' 'Active(anon): 7018944 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB'
'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526912 kB' 'Mapped: 205472 kB' 'Shmem: 6495304 kB' 'KReclaimable: 547084 kB' 'Slab: 1195704 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 648620 kB' 'KernelStack: 22112 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8488120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218780 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:52.030
[repeated setup/common.sh@32 xtrace elided: each field from MemTotal through FilePmdMapped is compared against HugePages_Surp and skipped via continue]
00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.031 13:05:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43506356 kB' 'MemAvailable: 47493716 kB' 'Buffers: 6064 
kB' 'Cached: 10577240 kB' 'SwapCached: 0 kB' 'Active: 7417368 kB' 'Inactive: 3689560 kB' 'Active(anon): 7018944 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526908 kB' 'Mapped: 205472 kB' 'Shmem: 6495320 kB' 'KReclaimable: 547084 kB' 'Slab: 1195704 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 648620 kB' 'KernelStack: 22112 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8488144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218780 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.031 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 
13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.032 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.033 13:05:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:52.033 nr_hugepages=1024 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.033 resv_hugepages=0 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.033 surplus_hugepages=0 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.033 anon_hugepages=0 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.033 13:05:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43506784 kB' 'MemAvailable: 47494144 kB' 'Buffers: 6064 kB' 'Cached: 10577260 kB' 'SwapCached: 0 kB' 'Active: 7417392 kB' 'Inactive: 3689560 kB' 'Active(anon): 7018968 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526904 kB' 'Mapped: 205472 kB' 'Shmem: 6495340 kB' 'KReclaimable: 547084 kB' 'Slab: 1195704 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 648620 kB' 'KernelStack: 22112 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8488164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218780 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.033 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.034 13:05:02
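The long run of `[[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]` / `continue` pairs in the trace above is xtrace output from a loop that scans meminfo one `Key: value` line at a time until the requested key matches, then echoes its value. A minimal standalone sketch of that lookup follows; the function name and the optional file argument (added here so the logic can be exercised against a fixture file) are assumptions for illustration, not the real `setup/common.sh` interface:

```shell
#!/usr/bin/env bash
shopt -s extglob

# Hypothetical condensation of the get_meminfo logic visible in the trace.
# Echoes the value of the first matching key, or 0 if the key is absent.
get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=${3:-/proc/meminfo}
    local var val _ line mem
    # Per-node queries read that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node N " prefix; strip it (extglob).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Non-matching keys are skipped, hence one [[ ]]/continue pair per
        # meminfo line in the trace above.
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done
    echo 0
}
```

On a Linux box, `get_meminfo_sketch HugePages_Total` would then report the system-wide hugepage count the trace checks against `nr_hugepages`.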
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 27458476 kB' 'MemUsed: 5180664 kB' 'SwapCached: 0 kB' 'Active: 2304044 kB' 'Inactive: 231284 kB' 'Active(anon): 2170996 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2163108 kB' 'Mapped: 86948 kB' 'AnonPages: 375448 kB' 'Shmem: 1798776 kB' 'KernelStack: 12552 kB' 'PageTables: 5236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 212480 kB' 'Slab: 514440 kB' 'SReclaimable: 212480 kB' 'SUnreclaim: 301960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.034 13:05:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:52.034 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.035 13:05:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31-32 repeat (IFS=': ', read -r var val _, no-match continue) for the remaining node0 fields: NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free ...]
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16048560 kB' 'MemUsed: 11607520 kB' 'SwapCached: 0 kB' 'Active: 5113380 kB' 'Inactive: 3458276 kB' 'Active(anon): 4848004 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8420240 kB' 'Mapped: 118524 kB' 'AnonPages: 151456 kB' 'Shmem: 4696588 kB' 'KernelStack: 9560 kB' 'PageTables: 3324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334604 kB' 'Slab: 681264 kB' 'SReclaimable: 334604 kB' 'SUnreclaim: 346660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.035 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 repeat (IFS=': ', read -r var val _, no-match continue) for every node1 field from MemFree through HugePages_Free ...]
00:04:52.036 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.036 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.036 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:52.036 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:52.036 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:52.036 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:52.036 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:52.036 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:52.036 node0=512 expecting 512
00:04:52.036 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:52.036 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:52.036 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:52.036 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:52.036 node1=512 expecting 512
00:04:52.036 13:05:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:52.036 
00:04:52.036 real 0m4.266s
00:04:52.036 user 0m1.559s
00:04:52.036 sys 0m2.714s
00:04:52.036 13:05:02 setup.sh.hugepages.even_2G_alloc -- 
common/autotest_common.sh@1126 -- # xtrace_disable
00:04:52.036 13:05:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:52.036 ************************************
00:04:52.036 END TEST even_2G_alloc
00:04:52.036 ************************************
00:04:52.036 13:05:02 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:52.036 13:05:02 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:52.036 13:05:02 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:52.036 13:05:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:52.036 ************************************
00:04:52.036 START TEST odd_alloc
00:04:52.036 ************************************
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:52.036 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:52.037 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:52.037 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:52.037 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:52.037 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:52.037 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:52.037 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:52.037 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:52.037 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:52.037 13:05:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:52.037 13:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:52.037 13:05:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:04:56.261 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:56.261 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:56.261 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:56.261 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:56.261 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:56.261 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:56.261 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:56.261 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:56.261 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:56.261 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:56.261 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:56.261 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:56.261 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:56.261 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:56.261 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:56.261 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:56.261 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.261 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43482688 kB' 'MemAvailable: 47470048 kB' 'Buffers: 6064 kB' 'Cached: 10577396 kB' 'SwapCached: 0 kB' 'Active: 7418620 kB' 'Inactive: 3689560 kB' 'Active(anon): 7020196 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527992 kB' 'Mapped: 205588 kB' 'Shmem: 6495476 kB' 'KReclaimable: 547084 kB' 'Slab: 1196108 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 649024 kB' 'KernelStack: 22176 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8491912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218940 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB'
00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 repeat (IFS=': ', read -r var val _, no-match continue) for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable and Mlocked ...]
00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.262 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.262 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.263 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.263 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.263 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43483544 kB' 'MemAvailable: 47470904 kB' 'Buffers: 
6064 kB' 'Cached: 10577396 kB' 'SwapCached: 0 kB' 'Active: 7418448 kB' 'Inactive: 3689560 kB' 'Active(anon): 7020024 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527856 kB' 'Mapped: 205512 kB' 'Shmem: 6495476 kB' 'KReclaimable: 547084 kB' 'Slab: 1196200 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 649116 kB' 'KernelStack: 22064 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8489064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218924 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.263 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.264 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:56.265 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43484048 kB' 'MemAvailable: 47471408 kB' 'Buffers: 6064 kB' 'Cached: 10577416 kB' 'SwapCached: 0 kB' 'Active: 7418204 kB' 'Inactive: 3689560 kB' 'Active(anon): 7019780 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527632 kB' 'Mapped: 205492 kB' 'Shmem: 6495496 kB' 'KReclaimable: 547084 kB' 'Slab: 1196200 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 649116 kB' 'KernelStack: 22112 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 
kB' 'Committed_AS: 8489084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218876 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.265 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 
13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.266 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.267 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.267 
13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.267 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:56.267 nr_hugepages=1025 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:56.267 resv_hugepages=0 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:56.267 surplus_hugepages=0 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:56.267 anon_hugepages=0 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@19 -- # local var val 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43484048 kB' 'MemAvailable: 47471408 kB' 'Buffers: 6064 kB' 'Cached: 10577436 kB' 'SwapCached: 0 kB' 'Active: 7418232 kB' 'Inactive: 3689560 kB' 'Active(anon): 7019808 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527632 kB' 'Mapped: 205492 kB' 'Shmem: 6495516 kB' 'KReclaimable: 547084 kB' 'Slab: 1196200 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 649116 kB' 'KernelStack: 22112 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8489108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218876 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.267 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ [... identical non-matching field iterations trimmed ...] 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc --
setup/hugepages.sh@27 -- # local node 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.269 
13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 27451392 kB' 'MemUsed: 5187748 kB' 'SwapCached: 0 kB' 'Active: 2304776 kB' 'Inactive: 231284 kB' 'Active(anon): 2171728 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2163276 kB' 'Mapped: 86968 kB' 'AnonPages: 375980 kB' 'Shmem: 1798944 kB' 'KernelStack: 12520 kB' 'PageTables: 5140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 212480 kB' 'Slab: 515012 kB' 'SReclaimable: 212480 kB' 'SUnreclaim: 302532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.269 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [... identical non-matching field iterations trimmed ...] 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.270
13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16034132 kB' 'MemUsed: 11621948 kB' 'SwapCached: 0 kB' 'Active: 5113424 kB' 'Inactive: 3458276 kB' 'Active(anon): 4848048 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8420244 kB' 'Mapped: 118524 kB' 'AnonPages: 151648 kB' 'Shmem: 4696592 kB' 'KernelStack: 9592 kB' 'PageTables: 3412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334604 kB' 'Slab: 681188 kB' 'SReclaimable: 334604 
kB' 'SUnreclaim: 346584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.270 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 
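For a specific node, `get_meminfo` (the `common.sh@22`–`@29` lines at the start of this scan) switches its source from `/proc/meminfo` to `/sys/devices/system/node/node<N>/meminfo`, whose lines carry a `Node <N> ` prefix; `common.sh@29` strips that prefix with an extglob substitution so the same read loop can run unchanged. A sketch of that prefix-stripping step on sample input (the data below is illustrative, not read from live sysfs):

```shell
#!/usr/bin/env bash
shopt -s extglob
# Per-node meminfo lines look like "Node 1 HugePages_Surp: 0".
# Strip the "Node <N> " prefix from every mapfile entry, as
# common.sh@29 does, so a generic key/value parser applies.
mapfile -t mem <<'EOF'
Node 1 HugePages_Total:   513
Node 1 HugePages_Free:    513
Node 1 HugePages_Surp:      0
EOF
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
```

The `+([0-9])` extended pattern matches the node number, so the same one-liner works for any node index without hard-coding it.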
13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.271 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.271 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.272 13:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:56.272 node0=512 expecting 513 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:56.272 node1=513 expecting 512 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:56.272 00:04:56.272 real 0m4.240s 00:04:56.272 user 0m1.540s 00:04:56.272 sys 0m2.759s 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.272 13:05:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:56.272 ************************************ 00:04:56.272 END TEST odd_alloc 00:04:56.272 ************************************ 00:04:56.272 13:05:06 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:56.272 13:05:06 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.272 13:05:06 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.272 13:05:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:56.531 ************************************ 00:04:56.531 START TEST custom_alloc 00:04:56.531 ************************************ 00:04:56.531 13:05:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:56.531 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:56.531 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:56.531 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:56.531 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:56.531 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:56.531 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:56.531 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:56.531 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:56.531 13:05:06 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.531 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:56.531 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:56.531 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:56.532 13:05:06 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:56.532 13:05:06 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:56.532 13:05:06 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.532 13:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:59.823 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:59.823 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:00.086 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:00.086 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:00.086 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:00.086 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:00.086 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:00.086 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:00.086 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:00.086 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:00.086 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:00.086 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:00.086 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:00.086 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:00.086 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:00.086 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:00.086 0000:d8:00.0 (8086 0a54): Already using 
the vfio-pci driver 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.086 13:05:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42459792 kB' 'MemAvailable: 46447152 kB' 'Buffers: 6064 kB' 'Cached: 10577548 kB' 'SwapCached: 0 kB' 'Active: 7420676 kB' 'Inactive: 3689560 kB' 'Active(anon): 7022252 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529468 kB' 'Mapped: 205600 kB' 'Shmem: 6495628 kB' 'KReclaimable: 547084 kB' 'Slab: 1195800 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 648716 kB' 'KernelStack: 22160 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8489876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218828 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.086 13:05:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.086 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 
13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.087 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.088 
13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42460044 kB' 'MemAvailable: 46447404 kB' 'Buffers: 6064 kB' 'Cached: 10577552 kB' 'SwapCached: 0 kB' 'Active: 7419680 kB' 'Inactive: 3689560 kB' 'Active(anon): 7021256 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 528968 kB' 'Mapped: 205500 kB' 'Shmem: 6495632 kB' 'KReclaimable: 547084 kB' 'Slab: 1195800 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 648716 kB' 'KernelStack: 22160 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8489892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218812 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 
13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.088 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.089 13:05:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:00.089 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42460464 kB' 'MemAvailable: 46447824 kB' 'Buffers: 6064 kB' 'Cached: 10577572 kB' 'SwapCached: 0 kB' 'Active: 7419888 kB' 'Inactive: 3689560 kB' 'Active(anon): 7021464 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
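The trace above shows `get_meminfo` preparing its input: when a NUMA node is given it reads `/sys/devices/system/node/node<n>/meminfo`, whose lines carry a `Node <n> ` prefix, and strips that prefix with an extglob parameter expansion (`mem=("${mem[@]#Node +([0-9]) }")`) so the lines parse identically to `/proc/meminfo`. A minimal self-contained sketch of that stripping step, using hypothetical sample lines rather than a live sysfs file:

```shell
#!/usr/bin/env bash
# Sketch of the "Node <n> " prefix stripping seen in setup/common.sh.
# The sample array below is hypothetical; real per-node meminfo lines
# have the same "Node 0 Key: value kB" shape.
shopt -s extglob  # enable +([0-9]) extended glob in parameter expansion

mem=('Node 0 MemTotal: 60295220 kB' 'Node 0 HugePages_Total: 1536')

# Remove the shortest leading match of "Node ", one-or-more digits, and a
# space from every array element, leaving plain "Key: value" lines.
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}"
```

With the prefix gone, the same `IFS=': '` read loop works for both the global and the per-node files, which is why the script keeps a single parsing path.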
'AnonPages: 529100 kB' 'Mapped: 205500 kB' 'Shmem: 6495652 kB' 'KReclaimable: 547084 kB' 'Slab: 1195800 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 648716 kB' 'KernelStack: 22160 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8489916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218812 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 
13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.090 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.091 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:00.354 nr_hugepages=1536 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc 
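The long `IFS=': ' read -r var val _` / `continue` runs above are bash xtrace output of a key lookup over meminfo lines: each line is split on `': '`, every key is compared against the requested field (here `HugePages_Surp`, then `HugePages_Rsvd`), and on a match the value is echoed, yielding `surp=0` and `resv=0`. A hypothetical re-implementation of that lookup, fed a trimmed inline sample instead of the live `/proc/meminfo` so it stays self-contained:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo lookup pattern traced above. The function
# name and the sample data are illustrative, not taken from the script.
get_meminfo_value() {
  local get=$1 var val _
  # Split each "Key: value kB" line on ': '; the trailing unit, if any,
  # lands in the throwaway variable _.
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # traced as the long "continue" runs
    echo "$val"
    return 0
  done
  echo 0   # field absent: report 0, as the script does for surplus pages
}

sample='MemTotal: 60295220 kB
HugePages_Total: 1536
HugePages_Rsvd: 0
HugePages_Surp: 0'

surp=$(get_meminfo_value HugePages_Surp <<<"$sample")
resv=$(get_meminfo_value HugePages_Rsvd <<<"$sample")
echo "surp=$surp resv=$resv"
```

The per-field `[[ ... ]]` tests appear in the log with backslash-escaped patterns (`\H\u\g\e\P\a\g\e\s\_\S\u\r\p`) because xtrace re-quotes the right-hand side of `[[ == ]]` to show it is matched literally, not as a glob.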
-- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.354 resv_hugepages=0 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.354 surplus_hugepages=0 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.354 anon_hugepages=0 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42459960 kB' 'MemAvailable: 46447320 kB' 'Buffers: 6064 kB' 'Cached: 10577572 kB' 'SwapCached: 0 kB' 'Active: 7420424 kB' 'Inactive: 3689560 kB' 'Active(anon): 7022000 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 
'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529688 kB' 'Mapped: 206004 kB' 'Shmem: 6495652 kB' 'KReclaimable: 547084 kB' 'Slab: 1195800 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 648716 kB' 'KernelStack: 22176 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8491292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218796 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.354 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 
13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.355 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 
13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 27459572 kB' 'MemUsed: 5179568 kB' 'SwapCached: 0 kB' 'Active: 2303788 kB' 'Inactive: 231284 kB' 'Active(anon): 2170740 kB' 
'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2163428 kB' 'Mapped: 87480 kB' 'AnonPages: 374816 kB' 'Shmem: 1799096 kB' 'KernelStack: 12568 kB' 'PageTables: 5200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 212480 kB' 'Slab: 514596 kB' 'SReclaimable: 212480 kB' 'SUnreclaim: 302116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.356 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.357 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.357 13:05:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: field-by-field scan of node0 meminfo; WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free skipped with continue until HugePages_Surp matched] 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.358 13:05:10
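[editor's note: the trace above repeatedly runs a `read -r var val _` loop with `IFS=': '` over a meminfo file until the requested field matches. A minimal, self-contained sketch of that technique follows; the `get_meminfo` name mirrors the trace, but this is an illustrative reimplementation under assumed behavior, not SPDK's actual setup/common.sh.]

```shell
#!/usr/bin/env bash
# Sketch of the meminfo scan seen in the trace: split each "Field: value kB"
# line on ": ", skip until the requested field is found, then print its value.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # When a node id is given and the per-node file exists, read that instead
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <id> "; strip that prefix
    # first so both file formats parse the same way.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* *//' "$mem_f")
    return 1
}

get_meminfo MemTotal   # prints the system MemTotal value in kB
```

The `IFS=': '` trick treats the colon plus surrounding whitespace as one delimiter, so `var` receives the field name without its trailing colon and `val` the numeric value, matching the comparisons visible in the trace.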
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 14993188 kB' 'MemUsed: 12662892 kB' 'SwapCached: 0 kB' 'Active: 5121376 kB' 'Inactive: 3458276 kB' 'Active(anon): 4856000 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8420268 kB' 'Mapped: 118524 kB' 'AnonPages: 159524 kB' 'Shmem: 4696616 kB' 'KernelStack: 9576 kB' 'PageTables: 
3396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334604 kB' 'Slab: 681204 kB' 'SReclaimable: 334604 kB' 'SUnreclaim: 346600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:00.358 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: field-by-field scan of node1 meminfo; every field from MemTotal through HugePages_Free skipped with continue until HugePages_Surp matched] 00:05:00.359 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.359 13:05:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.359 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.359 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.359 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.359 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.359 13:05:10
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:00.359 node0=512 expecting 512 00:05:00.359 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.359 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.359 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.359 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:00.360 node1=1024 expecting 1024 00:05:00.360 13:05:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:00.360 00:05:00.360 real 0m3.889s 00:05:00.360 user 0m1.285s 00:05:00.360 sys 0m2.628s 00:05:00.360 13:05:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.360 13:05:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:00.360 ************************************ 00:05:00.360 END TEST custom_alloc 00:05:00.360 ************************************ 00:05:00.360 13:05:10 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:00.360 13:05:10 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.360 13:05:10 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.360 13:05:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:00.360 ************************************ 00:05:00.360 START TEST no_shrink_alloc 00:05:00.360 ************************************ 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:00.360 13:05:10 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 
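[editor's note: the no_shrink_alloc prologue above computes a page count from a requested size (2097152 kB against a 2048 kB default hugepage) and assigns it to the single user-supplied node id. The sketch below reproduces that arithmetic under those assumptions; the variable names mirror setup/hugepages.sh but the code is an illustrative reimplementation, not the script itself.]

```shell
#!/usr/bin/env bash
# Per-node hugepage accounting as walked through by the trace:
# 2097152 kB requested / 2048 kB per page = 1024 pages, all on node 0.
size_kb=2097152            # requested size (assumed kB, per Hugepagesize: 2048 kB)
default_hugepage_kb=2048   # default hugepage size on this system
nr_hugepages=$(( size_kb / default_hugepage_kb ))

user_nodes=(0)             # node ids passed by the caller
nodes_test=()              # per-node expected page counts
if (( ${#user_nodes[@]} > 0 )); then
    # Explicit node list: pin the full page count to each named node
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages
    done
fi

echo "node0=${nodes_test[0]}"   # node0=1024
```

This matches the later verification step in the log, where each node's observed count (free pages plus surplus) is compared against `nodes_test` before the test reports success.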
00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.360 13:05:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:04.556 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:04.556 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:04.556 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:04.556 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:04.556 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:04.556 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:04.556 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:04.556 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:04.556 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:04.556 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:04.556 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:04.556 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:04.556 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:04.556 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:04.556 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:04.556 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:04.556 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:04.556 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:04.556 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:04.556 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:04.556 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:04.556 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local 
surp 00:05:04.556 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:04.556 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:04.556 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43475956 kB' 'MemAvailable: 47463316 kB' 'Buffers: 6064 kB' 'Cached: 10577728 kB' 'SwapCached: 0 kB' 'Active: 7421036 kB' 'Inactive: 3689560 kB' 'Active(anon): 7022612 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529640 kB' 'Mapped: 205588 kB' 'Shmem: 6495808 kB' 'KReclaimable: 547084 kB' 'Slab: 1196816 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 649732 kB' 'KernelStack: 22144 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8490820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218812 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: field-by-field scan of system meminfo for AnonHugePages; fields from MemTotal onward skipped with continue, trace continues]
00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.557 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.558 
13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43476408 kB' 'MemAvailable: 47463768 kB' 'Buffers: 6064 kB' 'Cached: 10577728 kB' 'SwapCached: 0 kB' 'Active: 7421044 kB' 'Inactive: 3689560 kB' 'Active(anon): 7022620 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529624 kB' 'Mapped: 
205588 kB' 'Shmem: 6495808 kB' 'KReclaimable: 547084 kB' 'Slab: 1196816 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 649732 kB' 'KernelStack: 22112 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8490840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218796 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.558 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.558 13:05:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [identical @32/@31 continue/IFS/read trace repeated for each remaining /proc/meminfo key, Buffers through HugePages_Free, none matching HugePages_Surp] 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43475872 kB' 'MemAvailable: 47463232 kB' 
'Buffers: 6064 kB' 'Cached: 10577748 kB' 'SwapCached: 0 kB' 'Active: 7421040 kB' 'Inactive: 3689560 kB' 'Active(anon): 7022616 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530088 kB' 'Mapped: 205512 kB' 'Shmem: 6495828 kB' 'KReclaimable: 547084 kB' 'Slab: 1196844 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 649760 kB' 'KernelStack: 22160 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8490860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218796 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.560 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.561 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 
13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:04.562 nr_hugepages=1024 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:04.562 resv_hugepages=0 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:04.562 surplus_hugepages=0 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:04.562 anon_hugepages=0 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:04.562 13:05:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.562 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43476124 kB' 'MemAvailable: 47463484 kB' 'Buffers: 6064 kB' 'Cached: 10577772 kB' 'SwapCached: 0 kB' 'Active: 7420788 kB' 'Inactive: 3689560 kB' 'Active(anon): 7022364 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529304 kB' 'Mapped: 205512 kB' 'Shmem: 6495852 kB' 'KReclaimable: 547084 kB' 'Slab: 1196844 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 649760 kB' 'KernelStack: 22128 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8490884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218796 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 
0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.563 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # 
no_nodes=2 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.564 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26395472 kB' 'MemUsed: 6243668 kB' 'SwapCached: 0 kB' 'Active: 2304236 kB' 'Inactive: 231284 kB' 'Active(anon): 2171188 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2163564 kB' 'Mapped: 86988 kB' 'AnonPages: 375128 kB' 'Shmem: 1799232 kB' 'KernelStack: 12568 kB' 'PageTables: 5228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 212480 kB' 'Slab: 515016 kB' 'SReclaimable: 212480 kB' 'SUnreclaim: 302536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 
13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 
13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.565 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:04.566 node0=1024 expecting 1024
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:04.566 13:05:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:05:07.857 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:07.857 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:07.857 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:07.857 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:07.857 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:08.120 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:08.120 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:08.120 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:08.120 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:08.120 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:08.120 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:08.120 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:08.120 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:08.120 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:08.120 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:08.120 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:08.120 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:08.120 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:08.120 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:08.120 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:08.120 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:08.120 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:08.120 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:08.120 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:08.120 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
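The xtrace above is `get_meminfo` from `setup/common.sh` walking /proc/meminfo one field at a time: `IFS=': ' read -r var val _` splits each line into key and value, every non-matching key produces one `continue` in the trace, and the matching key's value is echoed. A minimal standalone sketch of that parsing pattern (a simplified, hypothetical helper, not the SPDK script itself):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-parsing loop traced above: split each line on ': '
# into key / value / unit and print the value for one requested field.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # every non-matching key is one "continue" entry in the xtrace
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Demo against a canned snippet so the example does not depend on the host:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 60295220 kB' 'HugePages_Total: 1024' 'HugePages_Surp: 0' > "$sample"
get_meminfo HugePages_Total "$sample"   # prints 1024
rm -f "$sample"
```

The surplus-page check in the log is this same loop invoked with `HugePages_Surp`, which is why the trace repeats the `continue`/`IFS`/`read` triplet once per meminfo key until that field is reached.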
00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43488628 kB' 'MemAvailable: 47475988 kB' 'Buffers: 6064 kB' 'Cached: 10577868 kB' 'SwapCached: 0 kB' 'Active: 7421324 kB' 'Inactive: 3689560 kB' 'Active(anon): 7022900 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530208 kB' 'Mapped: 205524 kB' 'Shmem: 6495948 kB' 'KReclaimable: 547084 kB' 'Slab: 1196756 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 649672 kB' 'KernelStack: 22144 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8491636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218828 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.121 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.122 13:05:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43488908 kB' 'MemAvailable: 47476268 kB' 'Buffers: 6064 kB' 'Cached: 10577872 kB' 'SwapCached: 0 kB' 'Active: 7421448 kB' 'Inactive: 3689560 kB' 'Active(anon): 7023024 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530344 kB' 'Mapped: 205516 kB' 'Shmem: 6495952 kB' 'KReclaimable: 547084 kB' 'Slab: 1196732 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 649648 kB' 'KernelStack: 22112 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8491656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218812 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:08.122 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: IFS=': ' read -r loop stepped over meminfo fields SwapCached through HugePages_Rsvd, none matching HugePages_Surp; each non-match hit the @32 continue] 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
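The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one record at a time with `IFS=': ' read -r var val _`, continuing past every field until the requested key (here HugePages_Surp) matches, then echoing its value. A minimal standalone sketch of that parsing pattern follows; the helper name `get_meminfo_sketch` and its optional file argument are illustrative only, and the real script additionally strips `Node N` prefixes when reading per-node meminfo files:

```shell
# Sketch (assumption: mirrors the get_meminfo loop traced in this log).
# meminfo lines look like "MemTotal:       60295220 kB"; splitting on
# ": " yields the field name, the value, and the (ignored) unit.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # the real script uses a bash pattern match: [[ $var == "$get" ]]
        if [ "$var" = "$get" ]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    echo 0   # field absent: report 0, as the @33 "echo 0" above does
    return 0
}
```

With the snapshot printed in this log, `get_meminfo_sketch HugePages_Surp` would yield 0, which is exactly the value hugepages.sh@99 stores as surp=0.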
00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43489160 kB' 'MemAvailable: 47476520 kB' 'Buffers: 6064 kB' 'Cached: 10577888 kB' 'SwapCached: 0 kB' 'Active: 7421428 kB' 'Inactive: 3689560 kB' 'Active(anon): 7023004 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530356 kB' 'Mapped: 205516 kB' 'Shmem: 6495968 kB' 'KReclaimable: 547084 kB' 'Slab: 1196872 kB' 
'SReclaimable: 547084 kB' 'SUnreclaim: 649788 kB' 'KernelStack: 22128 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8491676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218812 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:05:08.124 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: IFS=': ' read -r loop stepped over meminfo fields MemTotal through CmaTotal, none matching HugePages_Rsvd; each non-match hit the @32 continue] 
setup/common.sh@31 -- # IFS=': ' 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.126 13:05:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:08.126 nr_hugepages=1024 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:08.126 resv_hugepages=0 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:08.126 surplus_hugepages=0 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:08.126 anon_hugepages=0 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 
-- # mapfile -t mem 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43490372 kB' 'MemAvailable: 47477732 kB' 'Buffers: 6064 kB' 'Cached: 10577912 kB' 'SwapCached: 0 kB' 'Active: 7421624 kB' 'Inactive: 3689560 kB' 'Active(anon): 7023200 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530532 kB' 'Mapped: 205516 kB' 'Shmem: 6495992 kB' 'KReclaimable: 547084 kB' 'Slab: 1196872 kB' 'SReclaimable: 547084 kB' 'SUnreclaim: 649788 kB' 'KernelStack: 22112 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8492948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218812 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3452276 kB' 'DirectMap2M: 19302400 kB' 'DirectMap1G: 46137344 kB' 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.126 13:05:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:08.127 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # IFS=': ' / read -r var val _ / [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue (one cycle per non-matching /proc/meminfo key, MemFree through Unaccepted) 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node=0 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26404940 kB' 'MemUsed: 6234200 kB' 'SwapCached: 0 kB' 'Active: 2304136 kB' 'Inactive: 231284 kB' 'Active(anon): 2171088 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2163660 kB' 'Mapped: 86992 kB' 'AnonPages: 374908 kB' 'Shmem: 1799328 kB' 'KernelStack: 12520 kB' 'PageTables: 5092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 212480 kB' 'Slab: 514964 kB' 'SReclaimable: 212480 kB' 'SUnreclaim: 302484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.390 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.390 [... setup/common.sh@31-32: the same IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue check repeats for every remaining meminfo field, MemFree through HugePages_Free ...] 00:05:08.391 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.391 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.391 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:08.391 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:08.391 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:08.391 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:08.391 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:08.391 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo
'node0=1024 expecting 1024' 00:05:08.391 node0=1024 expecting 1024 00:05:08.392 13:05:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:08.392 00:05:08.392 real 0m7.921s 00:05:08.392 user 0m2.909s 00:05:08.392 sys 0m5.053s 00:05:08.392 13:05:18 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.392 13:05:18 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:08.392 ************************************ 00:05:08.392 END TEST no_shrink_alloc 00:05:08.392 ************************************ 00:05:08.392 13:05:18 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:08.392 13:05:18 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:08.392 13:05:18 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:08.392 13:05:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:08.392 13:05:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:08.392 13:05:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:08.392 13:05:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:08.392 13:05:18 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:08.392 13:05:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:08.392 13:05:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:08.392 13:05:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:08.392 13:05:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:08.392 13:05:18 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:08.392 13:05:18 setup.sh.hugepages -- 
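The no_shrink_alloc trace above extracts one field from a node's meminfo with the setup/common.sh pattern: mapfile the file into an array, strip the `Node N ` prefix that per-node files carry, then scan with `IFS=': ' read`. A minimal standalone sketch of that pattern (the `get_field` helper name and the file parameter are illustrative, not from the script):

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# get_field FIELD [FILE]: print FIELD's value from a meminfo-style file.
# Mirrors setup/common.sh: mapfile the lines, strip any "Node N " prefix
# (present in files under /sys/devices/system/node/), then split on ': '.
get_field() {
  local want=$1 mem_f=${2:-/proc/meminfo} line var val _
  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")   # no-op for plain /proc/meminfo
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$want" ]] || continue
    echo "$val"
    return 0
  done
  return 1
}
```

The trace's version inlines the scan as a `while read` loop with `continue` on every non-matching field; the array walk here is equivalent.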
setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:08.392 00:05:08.392 real 0m31.537s 00:05:08.392 user 0m10.883s 00:05:08.392 sys 0m19.183s 00:05:08.392 13:05:18 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.392 13:05:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:08.392 ************************************ 00:05:08.392 END TEST hugepages 00:05:08.392 ************************************ 00:05:08.392 13:05:18 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:05:08.392 13:05:18 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.392 13:05:18 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.392 13:05:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:08.392 ************************************ 00:05:08.392 START TEST driver 00:05:08.392 ************************************ 00:05:08.392 13:05:18 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:05:08.651 * Looking for test storage... 
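The clear_hp teardown traced above zeroes every hugepage pool by echoing 0 into each node's `nr_hugepages` files and exporting CLEAR_HUGE=yes. A sketch of that loop; the sysfs root is a parameter here (an assumption, purely so it can be exercised against a scratch directory, since writing under the real /sys requires root):

```shell
#!/usr/bin/env bash
# clear_hp [SYSFS_ROOT]: write 0 into every per-node, per-page-size
# nr_hugepages file, releasing all reserved hugepages. SYSFS_ROOT
# defaults to /sys; making it overridable is this sketch's addition.
clear_hp() {
  local root=${1:-/sys} node hp
  for node in "$root"/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
      [ -w "$hp/nr_hugepages" ] || continue   # skip missing/unwritable pools
      echo 0 > "$hp/nr_hugepages"
    done
  done
  export CLEAR_HUGE=yes
}
```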
00:05:08.651 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:05:08.651 13:05:18 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:08.651 13:05:18 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:08.651 13:05:18 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:13.929 13:05:24 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:13.929 13:05:24 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.929 13:05:24 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.929 13:05:24 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:13.929 ************************************ 00:05:13.929 START TEST guess_driver 00:05:13.929 ************************************ 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- 
# (( 256 > 0 )) 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:13.929 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:13.929 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:13.929 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:13.929 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:13.929 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:13.929 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:13.929 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:13.929 Looking for driver=vfio-pci 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output 
config 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.929 13:05:24 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:18.125 13:05:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:18.125 13:05:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:18.125 13:05:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:18.125 [... setup/driver.sh@57-61: the same marker/[[ vfio-pci == vfio-pci ]] check repeats for every remaining device in the setup output ...] 00:05:20.033 13:05:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:20.033 13:05:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:20.033 13:05:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:20.033 13:05:30 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:20.033 13:05:30 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:20.033 13:05:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:20.033 13:05:30 setup.sh.driver.guess_driver -- setup/common.sh@12 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:26.599 00:05:26.599 real 0m11.831s 00:05:26.599 user 0m3.017s 00:05:26.599 sys 0m6.114s 00:05:26.599 13:05:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.599 13:05:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:26.599 ************************************ 00:05:26.599 END TEST guess_driver 00:05:26.599 ************************************ 00:05:26.599 00:05:26.599 real 0m17.216s 00:05:26.599 user 0m4.368s 00:05:26.599 sys 0m9.250s 00:05:26.599 13:05:36 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.599 13:05:36 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:26.599 ************************************ 00:05:26.599 END TEST driver 00:05:26.599 ************************************ 00:05:26.599 13:05:36 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:05:26.599 13:05:36 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.599 13:05:36 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.599 13:05:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:26.599 ************************************ 00:05:26.599 START TEST devices 00:05:26.599 ************************************ 00:05:26.599 13:05:36 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:05:26.599 * Looking for test storage... 
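The guess_driver pass above settles on vfio-pci because IOMMU groups are populated (256 under /sys/kernel/iommu_groups) and `modprobe --show-depends vfio_pci` resolves to `.ko` files without loading anything. A condensed sketch of that decision; the function names follow setup/driver.sh but the bodies are simplified, and the uio_pci_generic fallback is an assumption about the alternative path not exercised in this run:

```shell
#!/usr/bin/env bash
# is_driver MOD: true if MOD resolves to kernel modules. --show-depends
# only prints the insmod lines, so nothing is actually loaded.
is_driver() {
  modprobe --show-depends "$1" 2>/dev/null | grep -q '\.ko'
}

# pick_driver: prefer vfio-pci when the IOMMU is active (groups present
# under /sys/kernel/iommu_groups), else fall back to uio_pci_generic.
pick_driver() {
  local groups=(/sys/kernel/iommu_groups/*)
  if [ -e "${groups[0]}" ] && is_driver vfio_pci; then
    echo vfio-pci
  elif is_driver uio_pci_generic; then
    echo uio_pci_generic
  else
    echo 'No valid driver found'
  fi
}
```

The `[ -e "${groups[0]}" ]` guard handles the unmatched-glob case, where bash leaves the literal pattern in the array.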
00:05:26.599 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:05:26.599 13:05:36 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:26.599 13:05:36 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:26.599 13:05:36 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:26.600 13:05:36 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:30.790 13:05:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:30.790 13:05:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:30.790 13:05:40 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:30.790 13:05:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:30.790 13:05:40 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:30.790 13:05:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:30.790 13:05:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:30.790 13:05:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:30.790 13:05:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:30.790 13:05:40 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:30.790 No valid GPT data, bailing 00:05:30.790 13:05:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:30.790 13:05:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:30.790 13:05:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:30.790 13:05:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:30.790 13:05:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:30.790 13:05:40 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:30.790 13:05:40 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:30.790 13:05:40 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.790 13:05:40 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.790 13:05:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:30.790 ************************************ 00:05:30.790 START TEST nvme_mount 00:05:30.790 ************************************ 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:30.790 13:05:40 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:30.790 13:05:40 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:31.358 Creating new GPT entries in memory. 00:05:31.358 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:31.358 other utilities. 00:05:31.358 13:05:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:31.358 13:05:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.358 13:05:41 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:31.358 13:05:41 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:31.358 13:05:41 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:32.296 Creating new GPT entries in memory. 00:05:32.296 The operation has completed successfully. 
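The `--new=1:2048:2099199` range in the sgdisk call above comes from the arithmetic in setup/common.sh@51-60: a 1 GiB partition size converted to 512-byte sectors, with the first partition aligned at sector 2048. A minimal sketch of that computation (it only prints the sgdisk command; it does not run it):

```shell
#!/usr/bin/env bash
# Sketch of the partition-range arithmetic in setup/common.sh@51-60,
# assuming 512-byte sectors as the logged sgdisk call implies.
size=1073741824          # 1 GiB per partition, in bytes
(( size /= 512 ))        # convert to 512-byte sectors -> 2097152

part_no=1
part_start=0
part_end=0
for (( part = 1; part <= part_no; part++ )); do
    # First partition starts at sector 2048; later ones would continue
    # one sector past the previous end.
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    echo "sgdisk /dev/nvme0n1 --new=$part:$part_start:$part_end"
done
```

With `part_no=1` this yields exactly the `--new=1:2048:2099199` seen in the log (2048 + 2097152 - 1 = 2099199).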
00:05:32.296 13:05:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:32.296 13:05:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.296 13:05:42 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 750776 00:05:32.296 13:05:42 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.296 13:05:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:32.296 13:05:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.296 13:05:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:32.296 13:05:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:32.296 13:05:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.554 13:05:42 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:32.554 13:05:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:32.554 13:05:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:32.554 13:05:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.554 13:05:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:32.554 
13:05:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:32.554 13:05:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:32.554 13:05:42 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:32.554 13:05:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:32.554 13:05:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.554 13:05:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:32.554 13:05:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:32.554 13:05:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.554 13:05:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:36.745 13:05:46 
setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:36.745 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:36.745 13:05:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:36.745 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:36.745 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:36.745 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:36.745 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:36.745 13:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:36.745 13:05:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:36.745 13:05:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.003 13:05:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:37.003 13:05:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:37.003 13:05:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.004 13:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:37.004 13:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:37.004 13:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:37.004 13:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.004 13:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:37.004 13:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:37.004 13:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:37.004 13:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:37.004 13:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:37.004 13:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.004 13:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:37.004 13:05:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:37.004 13:05:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.004 13:05:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:40.315 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:40.574 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:05:40.574 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:40.574 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:40.574 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:40.574 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:40.574 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:40.574 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:40.574 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:40.574 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.574 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:40.574 13:05:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # 
setup output config 00:05:40.574 13:05:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.574 13:05:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:44.767 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:44.767 00:05:44.767 real 0m14.351s 00:05:44.767 user 0m4.178s 00:05:44.767 sys 0m8.082s 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.767 13:05:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:44.767 ************************************ 00:05:44.767 END TEST nvme_mount 00:05:44.767 ************************************ 00:05:44.767 13:05:55 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:44.767 13:05:55 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.767 13:05:55 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.767 13:05:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:44.767 ************************************ 00:05:44.767 START TEST dm_mount 00:05:44.767 ************************************ 00:05:44.767 13:05:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:44.767 13:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:44.767 13:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:44.767 13:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:44.767 13:05:55 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:44.767 13:05:55 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:44.767 13:05:55 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:44.767 13:05:55 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:44.767 13:05:55 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:44.767 13:05:55 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:44.767 13:05:55 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:44.768 13:05:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:44.768 13:05:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.768 13:05:55 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:44.768 13:05:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:44.768 13:05:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.768 13:05:55 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:44.768 13:05:55 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:44.768 13:05:55 
setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.768 13:05:55 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:44.768 13:05:55 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:44.768 13:05:55 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:45.711 Creating new GPT entries in memory. 00:05:45.711 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:45.711 other utilities. 00:05:45.711 13:05:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:45.711 13:05:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:45.711 13:05:56 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:45.711 13:05:56 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:45.711 13:05:56 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:46.649 Creating new GPT entries in memory. 00:05:46.649 The operation has completed successfully. 00:05:46.649 13:05:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:46.649 13:05:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:46.649 13:05:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:46.649 13:05:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:46.649 13:05:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:48.029 The operation has completed successfully. 
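For dm_mount the same setup/common.sh loop runs with `part_no=2`, and the ternary in common.sh@58 chains the ranges: each new partition starts one sector after the previous partition's end. A minimal sketch reproducing both logged ranges (print-only, no sgdisk executed):

```shell
#!/usr/bin/env bash
# Sketch of the two-partition case of setup/common.sh@57-60, assuming
# 512-byte sectors. Matches the --new=1:... and --new=2:... calls logged.
size=$(( 1073741824 / 512 ))   # 1 GiB in 512-byte sectors -> 2097152
part_start=0
part_end=0
for part in 1 2; do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    echo "--new=$part:$part_start:$part_end"
done
```

The second iteration continues at 2099199 + 1 = 2099200 and ends at 2099200 + 2097152 - 1 = 4196351, matching the `--new=2:2099200:4196351` entry above.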
00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 755954 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local 
dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.029 
13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.029 13:05:58 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.220 
13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.220 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.221 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.221 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.221 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.221 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.221 13:06:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.221 
13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local 
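The long run of `[[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]` lines is `verify()` comparing every BDF reported by `setup.sh config` against the single allowed device; xtrace quotes each character of the pattern side, which is why the right-hand operand looks escaped. A minimal sketch of that comparison, using example BDFs rather than live `setup.sh` output:

```shell
#!/usr/bin/env bash
# Sketch of the PCI allow-list match performed by verify() above.
# The BDF list here is a hard-coded example, not read from a real system.
allowed="0000:d8:00.0"
found=0
for pci in 0000:00:04.7 0000:80:04.0 0000:d8:00.0; do
  # Quoting $allowed forces a literal string match, which is what the
  # escaped pattern in the xtrace output amounts to.
  if [[ $pci == "$allowed" ]]; then
    found=1
  fi
done
echo "found=$found"
```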
test_file= 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.221 13:06:02 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:55.507 13:06:05 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:55.767 13:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:55.767 13:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:55.767 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 
(ext4): 53 ef 00:05:55.767 13:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:55.767 13:06:06 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:55.767 00:05:55.767 real 0m10.940s 00:05:55.767 user 0m2.733s 00:05:55.767 sys 0m5.256s 00:05:55.768 13:06:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.768 13:06:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:55.768 ************************************ 00:05:55.768 END TEST dm_mount 00:05:55.768 ************************************ 00:05:55.768 13:06:06 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:55.768 13:06:06 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:55.768 13:06:06 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:55.768 13:06:06 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:55.768 13:06:06 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:55.768 13:06:06 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:55.768 13:06:06 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:56.027 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:56.027 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:56.027 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:56.027 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:56.027 13:06:06 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:56.027 13:06:06 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:56.027 13:06:06 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 
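The cleanup sequence above has a strict ordering: unmount first, then remove the device-mapper target, then wipe filesystem signatures from the partitions, and only then wipe the disk's GPT/PMBR. A dry-run sketch of that ordering (commands are echoed, never executed; names are taken from the log):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the cleanup_dm / cleanup_nvme ordering shown above.
run() { echo "+ $*"; }          # echo instead of executing
run umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount
run dmsetup remove --force nvme_dm_test   # tear down the dm target
run wipefs --all /dev/nvme0n1p1           # erase per-partition signatures
run wipefs --all /dev/nvme0n1p2
run wipefs --all /dev/nvme0n1             # finally erase GPT and PMBR
```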
00:05:56.027 13:06:06 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:56.027 13:06:06 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:56.027 13:06:06 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:56.027 13:06:06 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:56.027 00:05:56.027 real 0m30.280s 00:05:56.027 user 0m8.517s 00:05:56.027 sys 0m16.595s 00:05:56.027 13:06:06 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.027 13:06:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:56.027 ************************************ 00:05:56.027 END TEST devices 00:05:56.027 ************************************ 00:05:56.027 00:05:56.027 real 1m48.772s 00:05:56.027 user 0m33.021s 00:05:56.027 sys 1m3.281s 00:05:56.027 13:06:06 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.027 13:06:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:56.027 ************************************ 00:05:56.027 END TEST setup.sh 00:05:56.027 ************************************ 00:05:56.027 13:06:06 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:06:00.220 Hugepages 00:06:00.220 node hugesize free / total 00:06:00.220 node0 1048576kB 0 / 0 00:06:00.220 node0 2048kB 1024 / 1024 00:06:00.220 node1 1048576kB 0 / 0 00:06:00.220 node1 2048kB 1024 / 1024 00:06:00.220 00:06:00.220 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:00.220 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:06:00.220 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:06:00.220 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:06:00.220 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:06:00.220 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:06:00.220 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:06:00.220 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:06:00.220 I/OAT 0000:00:04.7 8086 2021 
0 ioatdma - - 00:06:00.220 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:06:00.220 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:06:00.220 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:06:00.220 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:06:00.220 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:06:00.220 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:00.220 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:00.220 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:00.220 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:06:00.220 13:06:10 -- spdk/autotest.sh@130 -- # uname -s 00:06:00.220 13:06:10 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:00.220 13:06:10 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:00.220 13:06:10 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:04.459 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:04.459 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:04.459 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:04.459 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:04.459 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:04.459 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:04.459 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:04.459 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:04.459 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:04.459 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:04.459 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:04.459 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:04.459 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:04.459 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:04.459 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:04.459 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:05.838 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:06:05.838 13:06:16 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:06.776 13:06:17 -- common/autotest_common.sh@1533 -- # bdfs=() 
00:06:06.776 13:06:17 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:06.776 13:06:17 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:06.776 13:06:17 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:06.776 13:06:17 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:06.776 13:06:17 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:06.776 13:06:17 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:06.776 13:06:17 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:06.776 13:06:17 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:07.035 13:06:17 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:07.035 13:06:17 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:06:07.035 13:06:17 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:06:11.229 Waiting for block devices as requested 00:06:11.229 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:11.229 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:11.229 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:11.229 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:11.229 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:11.488 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:11.488 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:11.488 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:11.748 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:11.748 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:11.748 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:12.007 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:12.007 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:12.007 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:12.265 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:12.265 0000:80:04.0 (8086 
2021): vfio-pci -> ioatdma 00:06:12.265 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:06:12.524 13:06:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:12.524 13:06:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:06:12.524 13:06:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:12.524 13:06:22 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:06:12.524 13:06:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:06:12.525 13:06:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:06:12.525 13:06:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:06:12.525 13:06:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:12.525 13:06:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:12.525 13:06:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:12.525 13:06:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:12.525 13:06:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:12.525 13:06:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:12.525 13:06:22 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:06:12.525 13:06:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:12.525 13:06:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:12.525 13:06:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:12.525 13:06:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:12.525 13:06:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:12.525 13:06:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:12.525 13:06:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:12.525 13:06:22 -- common/autotest_common.sh@1557 -- # continue 
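The pre-cleanup step above extracts the OACS field from `nvme id-ctrl` and checks bit 3 (Namespace Management support, 0x8) before looking at `unvmcap`. The sketch below runs the same grep/cut/mask pipeline against a canned sample line instead of a real controller:

```shell
#!/usr/bin/env bash
# Sketch of the OACS check from autotest_common.sh, applied to a sample
# `nvme id-ctrl` line (no real NVMe device is queried).
sample='oacs      : 0xe'
oacs=$(printf '%s\n' "$sample" | grep oacs | cut -d: -f2)
# Bit 3 (0x8) of OACS advertises Namespace Management support;
# 0xe & 0x8 = 8, matching oacs_ns_manage=8 in the log above.
oacs_ns_manage=$(( oacs & 0x8 ))
echo "oacs_ns_manage=$oacs_ns_manage"
```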
00:06:12.525 13:06:22 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:12.525 13:06:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:12.525 13:06:22 -- common/autotest_common.sh@10 -- # set +x 00:06:12.525 13:06:22 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:12.525 13:06:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:12.525 13:06:22 -- common/autotest_common.sh@10 -- # set +x 00:06:12.525 13:06:22 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:16.719 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:16.719 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:16.719 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:16.719 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:16.719 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:16.719 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:16.719 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:16.719 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:16.719 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:16.719 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:16.719 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:16.719 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:16.719 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:16.719 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:16.719 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:16.719 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:18.625 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:06:18.625 13:06:28 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:18.625 13:06:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:18.625 13:06:28 -- common/autotest_common.sh@10 -- # set +x 00:06:18.625 13:06:28 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:18.625 13:06:28 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:18.625 13:06:28 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 
00:06:18.625 13:06:28 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:18.625 13:06:28 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:18.625 13:06:28 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:18.625 13:06:28 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:18.625 13:06:28 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:18.625 13:06:28 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:18.625 13:06:28 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:18.625 13:06:28 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:18.625 13:06:29 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:18.625 13:06:29 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:06:18.625 13:06:29 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:18.625 13:06:29 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:06:18.625 13:06:29 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:06:18.625 13:06:29 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:18.625 13:06:29 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:06:18.625 13:06:29 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:06:18.625 13:06:29 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:06:18.625 13:06:29 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=767784 00:06:18.625 13:06:29 -- common/autotest_common.sh@1598 -- # waitforlisten 767784 00:06:18.625 13:06:29 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.625 13:06:29 -- common/autotest_common.sh@831 -- # '[' -z 767784 ']' 00:06:18.625 13:06:29 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.625 13:06:29 -- common/autotest_common.sh@836 
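`get_nvme_bdfs` above pipes `gen_nvme.sh` JSON through `jq -r '.config[].params.traddr'` to collect BDFs. The sketch below applies a `sed` approximation to a canned JSON fragment so it needs neither an SPDK checkout nor `jq`; the JSON shape mirrors what the log implies, but is an assumption here.

```shell
#!/usr/bin/env bash
# Sketch of BDF extraction as done by get_nvme_bdfs. The real helper uses
# gen_nvme.sh | jq; this stand-in parses a hard-coded sample with sed.
json='{"config":[{"params":{"traddr":"0000:d8:00.0","name":"Nvme0"}}]}'
bdfs=($(printf '%s\n' "$json" | sed -n 's/.*"traddr":"\([^"]*\)".*/\1/p'))
# Mirror the (( 1 == 0 )) guard in the trace: warn if nothing was found.
(( ${#bdfs[@]} == 0 )) && echo "no NVMe devices found" >&2
printf '%s\n' "${bdfs[@]}"
```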
-- # local max_retries=100 00:06:18.626 13:06:29 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.626 13:06:29 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.626 13:06:29 -- common/autotest_common.sh@10 -- # set +x 00:06:18.885 [2024-07-25 13:06:29.117672] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:06:18.885 [2024-07-25 13:06:29.117717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid767784 ] 00:06:18.885 [2024-07-25 13:06:29.222339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.885 [2024-07-25 13:06:29.304110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.823 13:06:29 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.823 13:06:29 -- common/autotest_common.sh@864 -- # return 0 00:06:19.823 13:06:29 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:06:19.823 13:06:29 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:06:19.823 13:06:29 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:06:23.116 nvme0n1 00:06:23.116 13:06:32 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:23.116 [2024-07-25 13:06:33.133578] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:23.116 request: 00:06:23.116 { 00:06:23.116 "nvme_ctrlr_name": "nvme0", 00:06:23.116 "password": "test", 00:06:23.116 "method": "bdev_nvme_opal_revert", 
00:06:23.116 "req_id": 1 00:06:23.116 } 00:06:23.116 Got JSON-RPC error response 00:06:23.116 response: 00:06:23.116 { 00:06:23.116 "code": -32602, 00:06:23.116 "message": "Invalid parameters" 00:06:23.116 } 00:06:23.116 13:06:33 -- common/autotest_common.sh@1604 -- # true 00:06:23.116 13:06:33 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:06:23.116 13:06:33 -- common/autotest_common.sh@1608 -- # killprocess 767784 00:06:23.116 13:06:33 -- common/autotest_common.sh@950 -- # '[' -z 767784 ']' 00:06:23.116 13:06:33 -- common/autotest_common.sh@954 -- # kill -0 767784 00:06:23.116 13:06:33 -- common/autotest_common.sh@955 -- # uname 00:06:23.116 13:06:33 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.116 13:06:33 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 767784 00:06:23.116 13:06:33 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.116 13:06:33 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.116 13:06:33 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 767784' 00:06:23.116 killing process with pid 767784 00:06:23.116 13:06:33 -- common/autotest_common.sh@969 -- # kill 767784 00:06:23.116 13:06:33 -- common/autotest_common.sh@974 -- # wait 767784 00:06:25.669 13:06:35 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:25.669 13:06:35 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:25.669 13:06:35 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]] 00:06:25.669 13:06:35 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]] 00:06:25.669 13:06:35 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:06:26.236 Restarting all devices. 
00:06:32.806 lstat() error: No such file or directory 00:06:32.806 QAT Error: No GENERAL section found 00:06:32.806 Failed to configure qat_dev0 00:06:32.806 lstat() error: No such file or directory 00:06:32.806 QAT Error: No GENERAL section found 00:06:32.806 Failed to configure qat_dev1 00:06:32.806 lstat() error: No such file or directory 00:06:32.806 QAT Error: No GENERAL section found 00:06:32.806 Failed to configure qat_dev2 00:06:32.806 lstat() error: No such file or directory 00:06:32.806 QAT Error: No GENERAL section found 00:06:32.806 Failed to configure qat_dev3 00:06:32.806 lstat() error: No such file or directory 00:06:32.806 QAT Error: No GENERAL section found 00:06:32.806 Failed to configure qat_dev4 00:06:32.806 enable sriov 00:06:32.806 Checking status of all devices. 00:06:32.806 There is 5 QAT acceleration device(s) in the system: 00:06:32.806 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:1a:00.0, #accel: 5 #engines: 10 state: down 00:06:32.806 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:1c:00.0, #accel: 5 #engines: 10 state: down 00:06:32.806 qat_dev2 - type: c6xx, inst_id: 2, node_id: 0, bsf: 0000:1e:00.0, #accel: 5 #engines: 10 state: down 00:06:32.806 qat_dev3 - type: c6xx, inst_id: 3, node_id: 0, bsf: 0000:3d:00.0, #accel: 5 #engines: 10 state: down 00:06:32.806 qat_dev4 - type: c6xx, inst_id: 4, node_id: 0, bsf: 0000:3f:00.0, #accel: 5 #engines: 10 state: down 00:06:32.806 0000:1a:00.0 set to 16 VFs 00:06:33.749 0000:1c:00.0 set to 16 VFs 00:06:34.686 0000:1e:00.0 set to 16 VFs 00:06:35.255 0000:3d:00.0 set to 16 VFs 00:06:36.194 0000:3f:00.0 set to 16 VFs 00:06:38.732 Properly configured the qat device with driver uio_pci_generic. 
00:06:38.732 13:06:48 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:38.732 13:06:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:38.732 13:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:38.732 13:06:48 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:38.732 13:06:48 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:06:38.732 13:06:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.732 13:06:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.732 13:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:38.732 ************************************ 00:06:38.732 START TEST env 00:06:38.732 ************************************ 00:06:38.732 13:06:48 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:06:38.732 * Looking for test storage... 00:06:38.732 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env 00:06:38.732 13:06:49 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:06:38.732 13:06:49 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.732 13:06:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.732 13:06:49 env -- common/autotest_common.sh@10 -- # set +x 00:06:38.732 ************************************ 00:06:38.732 START TEST env_memory 00:06:38.732 ************************************ 00:06:38.732 13:06:49 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:06:38.732 00:06:38.732 00:06:38.732 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.732 http://cunit.sourceforge.net/ 00:06:38.732 00:06:38.732 00:06:38.732 Suite: memory 00:06:38.732 Test: alloc and free memory map ...[2024-07-25 13:06:49.094748] 
/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:38.732 passed 00:06:38.732 Test: mem map translation ...[2024-07-25 13:06:49.121671] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:38.732 [2024-07-25 13:06:49.121694] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:38.732 [2024-07-25 13:06:49.121744] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:38.732 [2024-07-25 13:06:49.121756] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:38.732 passed 00:06:38.732 Test: mem map registration ...[2024-07-25 13:06:49.174858] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:38.732 [2024-07-25 13:06:49.174880] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:38.732 passed 00:06:38.994 Test: mem map adjacent registrations ...passed 00:06:38.994 00:06:38.994 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.994 suites 1 1 n/a 0 0 00:06:38.994 tests 4 4 4 0 0 00:06:38.994 asserts 152 152 152 0 n/a 00:06:38.994 00:06:38.994 Elapsed time = 0.183 seconds 00:06:38.994 00:06:38.994 real 0m0.198s 00:06:38.994 user 0m0.189s 00:06:38.994 sys 0m0.008s 00:06:38.994 13:06:49 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:06:38.994 13:06:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:38.994 ************************************ 00:06:38.994 END TEST env_memory 00:06:38.994 ************************************ 00:06:38.994 13:06:49 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:38.994 13:06:49 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.994 13:06:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.994 13:06:49 env -- common/autotest_common.sh@10 -- # set +x 00:06:38.994 ************************************ 00:06:38.994 START TEST env_vtophys 00:06:38.994 ************************************ 00:06:38.994 13:06:49 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:38.994 EAL: lib.eal log level changed from notice to debug 00:06:38.994 EAL: Detected lcore 0 as core 0 on socket 0 00:06:38.994 EAL: Detected lcore 1 as core 1 on socket 0 00:06:38.994 EAL: Detected lcore 2 as core 2 on socket 0 00:06:38.994 EAL: Detected lcore 3 as core 3 on socket 0 00:06:38.994 EAL: Detected lcore 4 as core 4 on socket 0 00:06:38.994 EAL: Detected lcore 5 as core 5 on socket 0 00:06:38.994 EAL: Detected lcore 6 as core 6 on socket 0 00:06:38.994 EAL: Detected lcore 7 as core 8 on socket 0 00:06:38.994 EAL: Detected lcore 8 as core 9 on socket 0 00:06:38.994 EAL: Detected lcore 9 as core 10 on socket 0 00:06:38.994 EAL: Detected lcore 10 as core 11 on socket 0 00:06:38.994 EAL: Detected lcore 11 as core 12 on socket 0 00:06:38.994 EAL: Detected lcore 12 as core 13 on socket 0 00:06:38.994 EAL: Detected lcore 13 as core 14 on socket 0 00:06:38.994 EAL: Detected lcore 14 as core 16 on socket 0 00:06:38.994 EAL: Detected lcore 15 as core 17 on socket 0 00:06:38.994 EAL: Detected lcore 16 as core 18 on socket 0 00:06:38.994 EAL: Detected lcore 17 as core 19 on socket 0 00:06:38.994 EAL: 
Detected lcore 18 as core 20 on socket 0 00:06:38.994 EAL: Detected lcore 19 as core 21 on socket 0 00:06:38.994 EAL: Detected lcore 20 as core 22 on socket 0 00:06:38.994 EAL: Detected lcore 21 as core 24 on socket 0 00:06:38.994 EAL: Detected lcore 22 as core 25 on socket 0 00:06:38.994 EAL: Detected lcore 23 as core 26 on socket 0 00:06:38.994 EAL: Detected lcore 24 as core 27 on socket 0 00:06:38.994 EAL: Detected lcore 25 as core 28 on socket 0 00:06:38.994 EAL: Detected lcore 26 as core 29 on socket 0 00:06:38.994 EAL: Detected lcore 27 as core 30 on socket 0 00:06:38.994 EAL: Detected lcore 28 as core 0 on socket 1 00:06:38.994 EAL: Detected lcore 29 as core 1 on socket 1 00:06:38.994 EAL: Detected lcore 30 as core 2 on socket 1 00:06:38.994 EAL: Detected lcore 31 as core 3 on socket 1 00:06:38.994 EAL: Detected lcore 32 as core 4 on socket 1 00:06:38.994 EAL: Detected lcore 33 as core 5 on socket 1 00:06:38.994 EAL: Detected lcore 34 as core 6 on socket 1 00:06:38.994 EAL: Detected lcore 35 as core 8 on socket 1 00:06:38.994 EAL: Detected lcore 36 as core 9 on socket 1 00:06:38.994 EAL: Detected lcore 37 as core 10 on socket 1 00:06:38.994 EAL: Detected lcore 38 as core 11 on socket 1 00:06:38.995 EAL: Detected lcore 39 as core 12 on socket 1 00:06:38.995 EAL: Detected lcore 40 as core 13 on socket 1 00:06:38.995 EAL: Detected lcore 41 as core 14 on socket 1 00:06:38.995 EAL: Detected lcore 42 as core 16 on socket 1 00:06:38.995 EAL: Detected lcore 43 as core 17 on socket 1 00:06:38.995 EAL: Detected lcore 44 as core 18 on socket 1 00:06:38.995 EAL: Detected lcore 45 as core 19 on socket 1 00:06:38.995 EAL: Detected lcore 46 as core 20 on socket 1 00:06:38.995 EAL: Detected lcore 47 as core 21 on socket 1 00:06:38.995 EAL: Detected lcore 48 as core 22 on socket 1 00:06:38.995 EAL: Detected lcore 49 as core 24 on socket 1 00:06:38.995 EAL: Detected lcore 50 as core 25 on socket 1 00:06:38.995 EAL: Detected lcore 51 as core 26 on socket 1 00:06:38.995 EAL: 
Detected lcore 52 as core 27 on socket 1 00:06:38.995 EAL: Detected lcore 53 as core 28 on socket 1 00:06:38.995 EAL: Detected lcore 54 as core 29 on socket 1 00:06:38.995 EAL: Detected lcore 55 as core 30 on socket 1 00:06:38.995 EAL: Detected lcore 56 as core 0 on socket 0 00:06:38.995 EAL: Detected lcore 57 as core 1 on socket 0 00:06:38.995 EAL: Detected lcore 58 as core 2 on socket 0 00:06:38.995 EAL: Detected lcore 59 as core 3 on socket 0 00:06:38.995 EAL: Detected lcore 60 as core 4 on socket 0 00:06:38.995 EAL: Detected lcore 61 as core 5 on socket 0 00:06:38.995 EAL: Detected lcore 62 as core 6 on socket 0 00:06:38.995 EAL: Detected lcore 63 as core 8 on socket 0 00:06:38.995 EAL: Detected lcore 64 as core 9 on socket 0 00:06:38.995 EAL: Detected lcore 65 as core 10 on socket 0 00:06:38.995 EAL: Detected lcore 66 as core 11 on socket 0 00:06:38.995 EAL: Detected lcore 67 as core 12 on socket 0 00:06:38.995 EAL: Detected lcore 68 as core 13 on socket 0 00:06:38.995 EAL: Detected lcore 69 as core 14 on socket 0 00:06:38.995 EAL: Detected lcore 70 as core 16 on socket 0 00:06:38.995 EAL: Detected lcore 71 as core 17 on socket 0 00:06:38.995 EAL: Detected lcore 72 as core 18 on socket 0 00:06:38.995 EAL: Detected lcore 73 as core 19 on socket 0 00:06:38.995 EAL: Detected lcore 74 as core 20 on socket 0 00:06:38.995 EAL: Detected lcore 75 as core 21 on socket 0 00:06:38.995 EAL: Detected lcore 76 as core 22 on socket 0 00:06:38.995 EAL: Detected lcore 77 as core 24 on socket 0 00:06:38.995 EAL: Detected lcore 78 as core 25 on socket 0 00:06:38.995 EAL: Detected lcore 79 as core 26 on socket 0 00:06:38.995 EAL: Detected lcore 80 as core 27 on socket 0 00:06:38.995 EAL: Detected lcore 81 as core 28 on socket 0 00:06:38.995 EAL: Detected lcore 82 as core 29 on socket 0 00:06:38.995 EAL: Detected lcore 83 as core 30 on socket 0 00:06:38.995 EAL: Detected lcore 84 as core 0 on socket 1 00:06:38.995 EAL: Detected lcore 85 as core 1 on socket 1 00:06:38.995 EAL: 
Detected lcore 86 as core 2 on socket 1 00:06:38.995 EAL: Detected lcore 87 as core 3 on socket 1 00:06:38.995 EAL: Detected lcore 88 as core 4 on socket 1 00:06:38.995 EAL: Detected lcore 89 as core 5 on socket 1 00:06:38.995 EAL: Detected lcore 90 as core 6 on socket 1 00:06:38.995 EAL: Detected lcore 91 as core 8 on socket 1 00:06:38.995 EAL: Detected lcore 92 as core 9 on socket 1 00:06:38.995 EAL: Detected lcore 93 as core 10 on socket 1 00:06:38.995 EAL: Detected lcore 94 as core 11 on socket 1 00:06:38.995 EAL: Detected lcore 95 as core 12 on socket 1 00:06:38.995 EAL: Detected lcore 96 as core 13 on socket 1 00:06:38.995 EAL: Detected lcore 97 as core 14 on socket 1 00:06:38.995 EAL: Detected lcore 98 as core 16 on socket 1 00:06:38.995 EAL: Detected lcore 99 as core 17 on socket 1 00:06:38.995 EAL: Detected lcore 100 as core 18 on socket 1 00:06:38.995 EAL: Detected lcore 101 as core 19 on socket 1 00:06:38.995 EAL: Detected lcore 102 as core 20 on socket 1 00:06:38.995 EAL: Detected lcore 103 as core 21 on socket 1 00:06:38.995 EAL: Detected lcore 104 as core 22 on socket 1 00:06:38.995 EAL: Detected lcore 105 as core 24 on socket 1 00:06:38.995 EAL: Detected lcore 106 as core 25 on socket 1 00:06:38.995 EAL: Detected lcore 107 as core 26 on socket 1 00:06:38.995 EAL: Detected lcore 108 as core 27 on socket 1 00:06:38.995 EAL: Detected lcore 109 as core 28 on socket 1 00:06:38.995 EAL: Detected lcore 110 as core 29 on socket 1 00:06:38.995 EAL: Detected lcore 111 as core 30 on socket 1 00:06:38.995 EAL: Maximum logical cores by configuration: 128 00:06:38.995 EAL: Detected CPU lcores: 112 00:06:38.995 EAL: Detected NUMA nodes: 2 00:06:38.995 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:38.995 EAL: Detected shared linkage of DPDK 00:06:38.995 EAL: No shared files mode enabled, IPC will be disabled 00:06:38.995 EAL: No shared files mode enabled, IPC is disabled 00:06:38.995 EAL: PCI driver qat for device 0000:1a:01.0 wants IOVA as 'PA' 
00:06:38.995 EAL: PCI driver qat for device 0000:1a:01.1 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1a:01.2 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1a:01.3 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1a:01.4 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1a:01.5 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1a:01.6 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1a:01.7 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1a:02.0 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1a:02.1 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1a:02.2 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1a:02.3 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1a:02.4 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1a:02.5 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1a:02.6 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1a:02.7 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1c:01.0 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1c:01.1 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1c:01.2 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1c:01.3 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1c:01.4 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1c:01.5 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1c:01.6 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1c:01.7 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1c:02.0 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1c:02.1 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1c:02.2 wants IOVA as 'PA' 00:06:38.995 EAL: PCI 
driver qat for device 0000:1c:02.3 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1c:02.4 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1c:02.5 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1c:02.6 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1c:02.7 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1e:01.0 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1e:01.1 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1e:01.2 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1e:01.3 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1e:01.4 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1e:01.5 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1e:01.6 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1e:01.7 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1e:02.0 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1e:02.1 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1e:02.2 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1e:02.3 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1e:02.4 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1e:02.5 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1e:02.6 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:1e:02.7 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3d:01.0 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3d:01.1 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3d:01.2 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3d:01.3 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3d:01.4 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 
0000:3d:01.5 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3d:01.6 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3d:01.7 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3d:02.0 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3d:02.1 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3d:02.2 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3d:02.3 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3d:02.4 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3d:02.5 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3d:02.6 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3d:02.7 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3f:01.0 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3f:01.1 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3f:01.2 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3f:01.3 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3f:01.4 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3f:01.5 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3f:01.6 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3f:01.7 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3f:02.0 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3f:02.1 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3f:02.2 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3f:02.3 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3f:02.4 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3f:02.5 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3f:02.6 wants IOVA as 'PA' 00:06:38.995 EAL: PCI driver qat for device 0000:3f:02.7 wants IOVA 
as 'PA' 00:06:38.995 EAL: Bus pci wants IOVA as 'PA' 00:06:38.995 EAL: Bus auxiliary wants IOVA as 'DC' 00:06:38.995 EAL: Bus vdev wants IOVA as 'DC' 00:06:38.995 EAL: Selected IOVA mode 'PA' 00:06:38.995 EAL: Probing VFIO support... 00:06:38.995 EAL: IOMMU type 1 (Type 1) is supported 00:06:38.995 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:38.995 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:38.995 EAL: VFIO support initialized 00:06:38.995 EAL: Ask a virtual area of 0x2e000 bytes 00:06:38.996 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:38.996 EAL: Setting up physically contiguous memory... 00:06:38.996 EAL: Setting maximum number of open files to 524288 00:06:38.996 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:38.996 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:38.996 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:38.996 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.996 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:38.996 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:38.996 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.996 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:38.996 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:38.996 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.996 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:38.996 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:38.996 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.996 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:38.996 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:38.996 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.996 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:38.996 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:38.996 EAL: Ask a virtual area of 0x400000000 
bytes 00:06:38.996 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:38.996 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:38.996 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.996 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:38.996 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:38.996 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.996 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:38.996 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:38.996 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:38.996 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.996 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:38.996 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:38.996 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.996 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:38.996 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:38.996 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.996 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:38.996 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:38.996 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.996 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:38.996 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:38.996 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.996 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:38.996 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:38.996 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.996 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:38.996 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:38.996 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.996 EAL: Virtual area found at 
0x201c00e00000 (size = 0x61000) 00:06:38.996 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:38.996 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.996 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:38.996 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:38.996 EAL: Hugepages will be freed exactly as allocated. 00:06:38.996 EAL: No shared files mode enabled, IPC is disabled 00:06:38.996 EAL: No shared files mode enabled, IPC is disabled 00:06:38.996 EAL: TSC frequency is ~2500000 KHz 00:06:38.996 EAL: Main lcore 0 is ready (tid=7f99f8fc4b00;cpuset=[0]) 00:06:38.996 EAL: Trying to obtain current memory policy. 00:06:38.996 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.996 EAL: Restoring previous memory policy: 0 00:06:38.996 EAL: request: mp_malloc_sync 00:06:38.996 EAL: No shared files mode enabled, IPC is disabled 00:06:38.996 EAL: Heap on socket 0 was expanded by 2MB 00:06:38.996 EAL: PCI device 0000:1a:01.0 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x202001000000 00:06:38.996 EAL: PCI memory mapped at 0x202001001000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:06:38.996 EAL: PCI device 0000:1a:01.1 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x202001002000 00:06:38.996 EAL: PCI memory mapped at 0x202001003000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:06:38.996 EAL: PCI device 0000:1a:01.2 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x202001004000 00:06:38.996 EAL: PCI memory mapped at 0x202001005000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:06:38.996 EAL: PCI device 0000:1a:01.3 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 
0x202001006000 00:06:38.996 EAL: PCI memory mapped at 0x202001007000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:06:38.996 EAL: PCI device 0000:1a:01.4 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x202001008000 00:06:38.996 EAL: PCI memory mapped at 0x202001009000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:06:38.996 EAL: PCI device 0000:1a:01.5 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x20200100a000 00:06:38.996 EAL: PCI memory mapped at 0x20200100b000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:06:38.996 EAL: PCI device 0000:1a:01.6 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x20200100c000 00:06:38.996 EAL: PCI memory mapped at 0x20200100d000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:06:38.996 EAL: PCI device 0000:1a:01.7 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x20200100e000 00:06:38.996 EAL: PCI memory mapped at 0x20200100f000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:06:38.996 EAL: PCI device 0000:1a:02.0 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x202001010000 00:06:38.996 EAL: PCI memory mapped at 0x202001011000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:06:38.996 EAL: PCI device 0000:1a:02.1 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x202001012000 00:06:38.996 EAL: PCI memory mapped at 0x202001013000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:06:38.996 EAL: PCI device 0000:1a:02.2 on NUMA socket 0 
00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x202001014000 00:06:38.996 EAL: PCI memory mapped at 0x202001015000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:06:38.996 EAL: PCI device 0000:1a:02.3 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x202001016000 00:06:38.996 EAL: PCI memory mapped at 0x202001017000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:06:38.996 EAL: PCI device 0000:1a:02.4 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x202001018000 00:06:38.996 EAL: PCI memory mapped at 0x202001019000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:06:38.996 EAL: PCI device 0000:1a:02.5 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x20200101a000 00:06:38.996 EAL: PCI memory mapped at 0x20200101b000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:06:38.996 EAL: PCI device 0000:1a:02.6 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x20200101c000 00:06:38.996 EAL: PCI memory mapped at 0x20200101d000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:06:38.996 EAL: PCI device 0000:1a:02.7 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x20200101e000 00:06:38.996 EAL: PCI memory mapped at 0x20200101f000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:06:38.996 EAL: PCI device 0000:1c:01.0 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x202001020000 00:06:38.996 EAL: PCI memory mapped at 0x202001021000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:1c:01.0 (socket 0) 00:06:38.996 EAL: PCI device 0000:1c:01.1 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x202001022000 00:06:38.996 EAL: PCI memory mapped at 0x202001023000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:06:38.996 EAL: PCI device 0000:1c:01.2 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x202001024000 00:06:38.996 EAL: PCI memory mapped at 0x202001025000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:06:38.996 EAL: PCI device 0000:1c:01.3 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x202001026000 00:06:38.996 EAL: PCI memory mapped at 0x202001027000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:06:38.996 EAL: PCI device 0000:1c:01.4 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x202001028000 00:06:38.996 EAL: PCI memory mapped at 0x202001029000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:06:38.996 EAL: PCI device 0000:1c:01.5 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x20200102a000 00:06:38.996 EAL: PCI memory mapped at 0x20200102b000 00:06:38.996 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:06:38.996 EAL: PCI device 0000:1c:01.6 on NUMA socket 0 00:06:38.996 EAL: probe driver: 8086:37c9 qat 00:06:38.996 EAL: PCI memory mapped at 0x20200102c000 00:06:38.996 EAL: PCI memory mapped at 0x20200102d000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:06:38.997 EAL: PCI device 0000:1c:01.7 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x20200102e000 00:06:38.997 EAL: PCI memory 
mapped at 0x20200102f000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:06:38.997 EAL: PCI device 0000:1c:02.0 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001030000 00:06:38.997 EAL: PCI memory mapped at 0x202001031000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:06:38.997 EAL: PCI device 0000:1c:02.1 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001032000 00:06:38.997 EAL: PCI memory mapped at 0x202001033000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:06:38.997 EAL: PCI device 0000:1c:02.2 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001034000 00:06:38.997 EAL: PCI memory mapped at 0x202001035000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:06:38.997 EAL: PCI device 0000:1c:02.3 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001036000 00:06:38.997 EAL: PCI memory mapped at 0x202001037000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:06:38.997 EAL: PCI device 0000:1c:02.4 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001038000 00:06:38.997 EAL: PCI memory mapped at 0x202001039000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:06:38.997 EAL: PCI device 0000:1c:02.5 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x20200103a000 00:06:38.997 EAL: PCI memory mapped at 0x20200103b000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:06:38.997 EAL: PCI device 0000:1c:02.6 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 
00:06:38.997 EAL: PCI memory mapped at 0x20200103c000 00:06:38.997 EAL: PCI memory mapped at 0x20200103d000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:06:38.997 EAL: PCI device 0000:1c:02.7 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x20200103e000 00:06:38.997 EAL: PCI memory mapped at 0x20200103f000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:06:38.997 EAL: PCI device 0000:1e:01.0 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001040000 00:06:38.997 EAL: PCI memory mapped at 0x202001041000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:06:38.997 EAL: PCI device 0000:1e:01.1 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001042000 00:06:38.997 EAL: PCI memory mapped at 0x202001043000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:06:38.997 EAL: PCI device 0000:1e:01.2 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001044000 00:06:38.997 EAL: PCI memory mapped at 0x202001045000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:06:38.997 EAL: PCI device 0000:1e:01.3 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001046000 00:06:38.997 EAL: PCI memory mapped at 0x202001047000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:06:38.997 EAL: PCI device 0000:1e:01.4 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001048000 00:06:38.997 EAL: PCI memory mapped at 0x202001049000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:06:38.997 EAL: PCI 
device 0000:1e:01.5 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x20200104a000 00:06:38.997 EAL: PCI memory mapped at 0x20200104b000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:06:38.997 EAL: PCI device 0000:1e:01.6 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x20200104c000 00:06:38.997 EAL: PCI memory mapped at 0x20200104d000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:06:38.997 EAL: PCI device 0000:1e:01.7 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x20200104e000 00:06:38.997 EAL: PCI memory mapped at 0x20200104f000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:06:38.997 EAL: PCI device 0000:1e:02.0 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001050000 00:06:38.997 EAL: PCI memory mapped at 0x202001051000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:06:38.997 EAL: PCI device 0000:1e:02.1 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001052000 00:06:38.997 EAL: PCI memory mapped at 0x202001053000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:06:38.997 EAL: PCI device 0000:1e:02.2 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001054000 00:06:38.997 EAL: PCI memory mapped at 0x202001055000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:06:38.997 EAL: PCI device 0000:1e:02.3 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001056000 00:06:38.997 EAL: PCI memory mapped at 0x202001057000 00:06:38.997 EAL: Probe 
PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:06:38.997 EAL: PCI device 0000:1e:02.4 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001058000 00:06:38.997 EAL: PCI memory mapped at 0x202001059000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:06:38.997 EAL: PCI device 0000:1e:02.5 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x20200105a000 00:06:38.997 EAL: PCI memory mapped at 0x20200105b000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:06:38.997 EAL: PCI device 0000:1e:02.6 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x20200105c000 00:06:38.997 EAL: PCI memory mapped at 0x20200105d000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:06:38.997 EAL: PCI device 0000:1e:02.7 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x20200105e000 00:06:38.997 EAL: PCI memory mapped at 0x20200105f000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:06:38.997 EAL: PCI device 0000:3d:01.0 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001060000 00:06:38.997 EAL: PCI memory mapped at 0x202001061000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:38.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.997 EAL: PCI memory unmapped at 0x202001060000 00:06:38.997 EAL: PCI memory unmapped at 0x202001061000 00:06:38.997 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:38.997 EAL: PCI device 0000:3d:01.1 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001062000 00:06:38.997 EAL: PCI memory mapped at 
0x202001063000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:38.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.997 EAL: PCI memory unmapped at 0x202001062000 00:06:38.997 EAL: PCI memory unmapped at 0x202001063000 00:06:38.997 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:38.997 EAL: PCI device 0000:3d:01.2 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001064000 00:06:38.997 EAL: PCI memory mapped at 0x202001065000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:38.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.997 EAL: PCI memory unmapped at 0x202001064000 00:06:38.997 EAL: PCI memory unmapped at 0x202001065000 00:06:38.997 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:38.997 EAL: PCI device 0000:3d:01.3 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001066000 00:06:38.997 EAL: PCI memory mapped at 0x202001067000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:38.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.997 EAL: PCI memory unmapped at 0x202001066000 00:06:38.997 EAL: PCI memory unmapped at 0x202001067000 00:06:38.997 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:38.997 EAL: PCI device 0000:3d:01.4 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x202001068000 00:06:38.997 EAL: PCI memory mapped at 0x202001069000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:38.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.997 EAL: PCI memory unmapped at 0x202001068000 00:06:38.997 EAL: PCI memory unmapped at 0x202001069000 00:06:38.997 EAL: Requested device 0000:3d:01.4 cannot be 
used 00:06:38.997 EAL: PCI device 0000:3d:01.5 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x20200106a000 00:06:38.997 EAL: PCI memory mapped at 0x20200106b000 00:06:38.997 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:38.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.997 EAL: PCI memory unmapped at 0x20200106a000 00:06:38.997 EAL: PCI memory unmapped at 0x20200106b000 00:06:38.997 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:38.997 EAL: PCI device 0000:3d:01.6 on NUMA socket 0 00:06:38.997 EAL: probe driver: 8086:37c9 qat 00:06:38.997 EAL: PCI memory mapped at 0x20200106c000 00:06:38.997 EAL: PCI memory mapped at 0x20200106d000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x20200106c000 00:06:38.998 EAL: PCI memory unmapped at 0x20200106d000 00:06:38.998 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:38.998 EAL: PCI device 0000:3d:01.7 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x20200106e000 00:06:38.998 EAL: PCI memory mapped at 0x20200106f000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x20200106e000 00:06:38.998 EAL: PCI memory unmapped at 0x20200106f000 00:06:38.998 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:38.998 EAL: PCI device 0000:3d:02.0 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x202001070000 00:06:38.998 EAL: PCI memory mapped at 0x202001071000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x202001070000 00:06:38.998 EAL: PCI memory unmapped at 0x202001071000 00:06:38.998 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:38.998 EAL: PCI device 0000:3d:02.1 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x202001072000 00:06:38.998 EAL: PCI memory mapped at 0x202001073000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x202001072000 00:06:38.998 EAL: PCI memory unmapped at 0x202001073000 00:06:38.998 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:38.998 EAL: PCI device 0000:3d:02.2 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x202001074000 00:06:38.998 EAL: PCI memory mapped at 0x202001075000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x202001074000 00:06:38.998 EAL: PCI memory unmapped at 0x202001075000 00:06:38.998 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:38.998 EAL: PCI device 0000:3d:02.3 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x202001076000 00:06:38.998 EAL: PCI memory mapped at 0x202001077000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x202001076000 00:06:38.998 EAL: PCI memory unmapped at 0x202001077000 00:06:38.998 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:38.998 EAL: PCI device 0000:3d:02.4 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 
0x202001078000 00:06:38.998 EAL: PCI memory mapped at 0x202001079000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x202001078000 00:06:38.998 EAL: PCI memory unmapped at 0x202001079000 00:06:38.998 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:38.998 EAL: PCI device 0000:3d:02.5 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x20200107a000 00:06:38.998 EAL: PCI memory mapped at 0x20200107b000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x20200107a000 00:06:38.998 EAL: PCI memory unmapped at 0x20200107b000 00:06:38.998 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:38.998 EAL: PCI device 0000:3d:02.6 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x20200107c000 00:06:38.998 EAL: PCI memory mapped at 0x20200107d000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x20200107c000 00:06:38.998 EAL: PCI memory unmapped at 0x20200107d000 00:06:38.998 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:38.998 EAL: PCI device 0000:3d:02.7 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x20200107e000 00:06:38.998 EAL: PCI memory mapped at 0x20200107f000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x20200107e000 00:06:38.998 EAL: PCI memory unmapped at 0x20200107f000 
00:06:38.998 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:38.998 EAL: PCI device 0000:3f:01.0 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x202001080000 00:06:38.998 EAL: PCI memory mapped at 0x202001081000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x202001080000 00:06:38.998 EAL: PCI memory unmapped at 0x202001081000 00:06:38.998 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:38.998 EAL: PCI device 0000:3f:01.1 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x202001082000 00:06:38.998 EAL: PCI memory mapped at 0x202001083000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x202001082000 00:06:38.998 EAL: PCI memory unmapped at 0x202001083000 00:06:38.998 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:38.998 EAL: PCI device 0000:3f:01.2 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x202001084000 00:06:38.998 EAL: PCI memory mapped at 0x202001085000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x202001084000 00:06:38.998 EAL: PCI memory unmapped at 0x202001085000 00:06:38.998 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:38.998 EAL: PCI device 0000:3f:01.3 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x202001086000 00:06:38.998 EAL: PCI memory mapped at 0x202001087000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 
(socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x202001086000 00:06:38.998 EAL: PCI memory unmapped at 0x202001087000 00:06:38.998 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:38.998 EAL: PCI device 0000:3f:01.4 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x202001088000 00:06:38.998 EAL: PCI memory mapped at 0x202001089000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x202001088000 00:06:38.998 EAL: PCI memory unmapped at 0x202001089000 00:06:38.998 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:38.998 EAL: PCI device 0000:3f:01.5 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x20200108a000 00:06:38.998 EAL: PCI memory mapped at 0x20200108b000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x20200108a000 00:06:38.998 EAL: PCI memory unmapped at 0x20200108b000 00:06:38.998 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:38.998 EAL: PCI device 0000:3f:01.6 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x20200108c000 00:06:38.998 EAL: PCI memory mapped at 0x20200108d000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x20200108c000 00:06:38.998 EAL: PCI memory unmapped at 0x20200108d000 00:06:38.998 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:38.998 EAL: PCI device 0000:3f:01.7 on NUMA socket 0 00:06:38.998 EAL: probe 
driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x20200108e000 00:06:38.998 EAL: PCI memory mapped at 0x20200108f000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x20200108e000 00:06:38.998 EAL: PCI memory unmapped at 0x20200108f000 00:06:38.998 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:38.998 EAL: PCI device 0000:3f:02.0 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x202001090000 00:06:38.998 EAL: PCI memory mapped at 0x202001091000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x202001090000 00:06:38.998 EAL: PCI memory unmapped at 0x202001091000 00:06:38.998 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:38.998 EAL: PCI device 0000:3f:02.1 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x202001092000 00:06:38.998 EAL: PCI memory mapped at 0x202001093000 00:06:38.998 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:38.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.998 EAL: PCI memory unmapped at 0x202001092000 00:06:38.998 EAL: PCI memory unmapped at 0x202001093000 00:06:38.998 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:38.998 EAL: PCI device 0000:3f:02.2 on NUMA socket 0 00:06:38.998 EAL: probe driver: 8086:37c9 qat 00:06:38.998 EAL: PCI memory mapped at 0x202001094000 00:06:38.998 EAL: PCI memory mapped at 0x202001095000 00:06:38.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:38.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.999 EAL: PCI memory unmapped at 0x202001094000 
00:06:38.999 EAL: PCI memory unmapped at 0x202001095000 00:06:38.999 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:38.999 EAL: PCI device 0000:3f:02.3 on NUMA socket 0 00:06:38.999 EAL: probe driver: 8086:37c9 qat 00:06:38.999 EAL: PCI memory mapped at 0x202001096000 00:06:38.999 EAL: PCI memory mapped at 0x202001097000 00:06:38.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:38.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.999 EAL: PCI memory unmapped at 0x202001096000 00:06:38.999 EAL: PCI memory unmapped at 0x202001097000 00:06:38.999 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:38.999 EAL: PCI device 0000:3f:02.4 on NUMA socket 0 00:06:38.999 EAL: probe driver: 8086:37c9 qat 00:06:38.999 EAL: PCI memory mapped at 0x202001098000 00:06:38.999 EAL: PCI memory mapped at 0x202001099000 00:06:38.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:06:38.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.999 EAL: PCI memory unmapped at 0x202001098000 00:06:38.999 EAL: PCI memory unmapped at 0x202001099000 00:06:38.999 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:38.999 EAL: PCI device 0000:3f:02.5 on NUMA socket 0 00:06:38.999 EAL: probe driver: 8086:37c9 qat 00:06:38.999 EAL: PCI memory mapped at 0x20200109a000 00:06:38.999 EAL: PCI memory mapped at 0x20200109b000 00:06:38.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:06:38.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:38.999 EAL: PCI memory unmapped at 0x20200109a000 00:06:38.999 EAL: PCI memory unmapped at 0x20200109b000 00:06:38.999 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:38.999 EAL: PCI device 0000:3f:02.6 on NUMA socket 0 00:06:38.999 EAL: probe driver: 8086:37c9 qat 00:06:38.999 EAL: PCI memory mapped at 0x20200109c000 00:06:38.999 EAL: PCI memory mapped at 0x20200109d000 00:06:38.999 EAL: 
Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0)
00:06:38.999 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:38.999 EAL: PCI memory unmapped at 0x20200109c000
00:06:38.999 EAL: PCI memory unmapped at 0x20200109d000
00:06:38.999 EAL: Requested device 0000:3f:02.6 cannot be used
00:06:38.999 EAL: PCI device 0000:3f:02.7 on NUMA socket 0
00:06:38.999 EAL: probe driver: 8086:37c9 qat
00:06:38.999 EAL: PCI memory mapped at 0x20200109e000
00:06:38.999 EAL: PCI memory mapped at 0x20200109f000
00:06:38.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0)
00:06:38.999 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:38.999 EAL: PCI memory unmapped at 0x20200109e000
00:06:38.999 EAL: PCI memory unmapped at 0x20200109f000
00:06:38.999 EAL: Requested device 0000:3f:02.7 cannot be used
00:06:38.999 EAL: No shared files mode enabled, IPC is disabled
00:06:38.999 EAL: No shared files mode enabled, IPC is disabled
00:06:38.999 EAL: No PCI address specified using 'addr=' in: bus=pci
00:06:38.999 EAL: Mem event callback 'spdk:(nil)' registered
00:06:39.259
00:06:39.259
00:06:39.259 CUnit - A unit testing framework for C - Version 2.1-3
00:06:39.259 http://cunit.sourceforge.net/
00:06:39.259
00:06:39.259
00:06:39.259 Suite: components_suite
00:06:39.259 Test: vtophys_malloc_test ...passed
00:06:39.259 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:39.259 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:39.259 EAL: Restoring previous memory policy: 4
00:06:39.259 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.259 EAL: request: mp_malloc_sync
00:06:39.259 EAL: No shared files mode enabled, IPC is disabled
00:06:39.259 EAL: Heap on socket 0 was expanded by 4MB
00:06:39.259 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.259 EAL: request: mp_malloc_sync
00:06:39.259 EAL: No shared files mode enabled, IPC is disabled
00:06:39.259 EAL: Heap on socket 0 was shrunk by 4MB
00:06:39.259 EAL: Trying to obtain current memory policy.
00:06:39.259 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:39.259 EAL: Restoring previous memory policy: 4
00:06:39.259 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.259 EAL: request: mp_malloc_sync
00:06:39.259 EAL: No shared files mode enabled, IPC is disabled
00:06:39.259 EAL: Heap on socket 0 was expanded by 6MB
00:06:39.259 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.259 EAL: request: mp_malloc_sync
00:06:39.259 EAL: No shared files mode enabled, IPC is disabled
00:06:39.259 EAL: Heap on socket 0 was shrunk by 6MB
00:06:39.259 EAL: Trying to obtain current memory policy.
00:06:39.259 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:39.259 EAL: Restoring previous memory policy: 4
00:06:39.259 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.259 EAL: request: mp_malloc_sync
00:06:39.259 EAL: No shared files mode enabled, IPC is disabled
00:06:39.259 EAL: Heap on socket 0 was expanded by 10MB
00:06:39.259 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.259 EAL: request: mp_malloc_sync
00:06:39.259 EAL: No shared files mode enabled, IPC is disabled
00:06:39.259 EAL: Heap on socket 0 was shrunk by 10MB
00:06:39.259 EAL: Trying to obtain current memory policy.
00:06:39.259 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:39.259 EAL: Restoring previous memory policy: 4
00:06:39.259 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.259 EAL: request: mp_malloc_sync
00:06:39.259 EAL: No shared files mode enabled, IPC is disabled
00:06:39.259 EAL: Heap on socket 0 was expanded by 18MB
00:06:39.259 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.259 EAL: request: mp_malloc_sync
00:06:39.259 EAL: No shared files mode enabled, IPC is disabled
00:06:39.259 EAL: Heap on socket 0 was shrunk by 18MB
00:06:39.259 EAL: Trying to obtain current memory policy.
00:06:39.259 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:39.259 EAL: Restoring previous memory policy: 4
00:06:39.259 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.259 EAL: request: mp_malloc_sync
00:06:39.259 EAL: No shared files mode enabled, IPC is disabled
00:06:39.259 EAL: Heap on socket 0 was expanded by 34MB
00:06:39.259 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.259 EAL: request: mp_malloc_sync
00:06:39.259 EAL: No shared files mode enabled, IPC is disabled
00:06:39.259 EAL: Heap on socket 0 was shrunk by 34MB
00:06:39.259 EAL: Trying to obtain current memory policy.
00:06:39.259 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:39.259 EAL: Restoring previous memory policy: 4
00:06:39.259 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.259 EAL: request: mp_malloc_sync
00:06:39.259 EAL: No shared files mode enabled, IPC is disabled
00:06:39.259 EAL: Heap on socket 0 was expanded by 66MB
00:06:39.259 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.259 EAL: request: mp_malloc_sync
00:06:39.259 EAL: No shared files mode enabled, IPC is disabled
00:06:39.259 EAL: Heap on socket 0 was shrunk by 66MB
00:06:39.259 EAL: Trying to obtain current memory policy.
00:06:39.259 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:39.259 EAL: Restoring previous memory policy: 4
00:06:39.259 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.259 EAL: request: mp_malloc_sync
00:06:39.259 EAL: No shared files mode enabled, IPC is disabled
00:06:39.259 EAL: Heap on socket 0 was expanded by 130MB
00:06:39.259 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.259 EAL: request: mp_malloc_sync
00:06:39.259 EAL: No shared files mode enabled, IPC is disabled
00:06:39.259 EAL: Heap on socket 0 was shrunk by 130MB
00:06:39.259 EAL: Trying to obtain current memory policy.
00:06:39.259 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:39.259 EAL: Restoring previous memory policy: 4
00:06:39.259 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.259 EAL: request: mp_malloc_sync
00:06:39.259 EAL: No shared files mode enabled, IPC is disabled
00:06:39.259 EAL: Heap on socket 0 was expanded by 258MB
00:06:39.259 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.259 EAL: request: mp_malloc_sync
00:06:39.259 EAL: No shared files mode enabled, IPC is disabled
00:06:39.259 EAL: Heap on socket 0 was shrunk by 258MB
00:06:39.259 EAL: Trying to obtain current memory policy.
00:06:39.260 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:39.519 EAL: Restoring previous memory policy: 4
00:06:39.519 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.519 EAL: request: mp_malloc_sync
00:06:39.519 EAL: No shared files mode enabled, IPC is disabled
00:06:39.519 EAL: Heap on socket 0 was expanded by 514MB
00:06:39.519 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.519 EAL: request: mp_malloc_sync
00:06:39.519 EAL: No shared files mode enabled, IPC is disabled
00:06:39.519 EAL: Heap on socket 0 was shrunk by 514MB
00:06:39.519 EAL: Trying to obtain current memory policy.
00:06:39.519 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:39.778 EAL: Restoring previous memory policy: 4
00:06:39.778 EAL: Calling mem event callback 'spdk:(nil)'
00:06:39.778 EAL: request: mp_malloc_sync
00:06:39.778 EAL: No shared files mode enabled, IPC is disabled
00:06:39.778 EAL: Heap on socket 0 was expanded by 1026MB
00:06:40.037 EAL: Calling mem event callback 'spdk:(nil)'
00:06:40.297 EAL: request: mp_malloc_sync
00:06:40.297 EAL: No shared files mode enabled, IPC is disabled
00:06:40.297 EAL: Heap on socket 0 was shrunk by 1026MB
00:06:40.297 passed
00:06:40.297
00:06:40.297 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:40.297               suites      1      1    n/a      0        0
00:06:40.297                tests      2      2      2      0        0
00:06:40.297              asserts   6506   6506   6506      0      n/a
00:06:40.297
00:06:40.297 Elapsed time = 1.020 seconds
00:06:40.297 EAL: No shared files mode enabled, IPC is disabled
00:06:40.297 EAL: No shared files mode enabled, IPC is disabled
00:06:40.297 EAL: No shared files mode enabled, IPC is disabled
00:06:40.297
00:06:40.297 real    0m1.215s
00:06:40.297 user    0m0.670s
00:06:40.297 sys     0m0.519s
00:06:40.297 13:06:50 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:40.297 13:06:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:40.297 ************************************
00:06:40.297 END TEST env_vtophys
00:06:40.297 ************************************
00:06:40.297 13:06:50 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:06:40.297 13:06:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:40.297 13:06:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:40.297 13:06:50 env -- common/autotest_common.sh@10 -- # set +x
00:06:40.297 ************************************
00:06:40.297 START TEST env_pci
00:06:40.297 ************************************
00:06:40.297 13:06:50 env.env_pci -- common/autotest_common.sh@1125 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:06:40.297
00:06:40.297
00:06:40.297 CUnit - A unit testing framework for C - Version 2.1-3
00:06:40.297 http://cunit.sourceforge.net/
00:06:40.297
00:06:40.297
00:06:40.297 Suite: pci
00:06:40.297 Test: pci_hook ...[2024-07-25 13:06:50.619689] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 771755 has claimed it
00:06:40.297 EAL: Cannot find device (10000:00:01.0)
00:06:40.297 EAL: Failed to attach device on primary process
00:06:40.297 passed
00:06:40.297
00:06:40.297 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:40.297               suites      1      1    n/a      0        0
00:06:40.297                tests      1      1      1      0        0
00:06:40.297              asserts     25     25     25      0      n/a
00:06:40.297
00:06:40.297 Elapsed time = 0.031 seconds
00:06:40.297
00:06:40.297 real    0m0.046s
00:06:40.297 user    0m0.010s
00:06:40.297 sys     0m0.036s
00:06:40.297 13:06:50 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:40.297 13:06:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:40.297 ************************************
00:06:40.297 END TEST env_pci
00:06:40.297 ************************************
00:06:40.297 13:06:50 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:40.297 13:06:50 env -- env/env.sh@15 -- # uname
00:06:40.297 13:06:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:40.297 13:06:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:40.297 13:06:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:40.297 13:06:50 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:06:40.297 13:06:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:40.297 13:06:50 env -- common/autotest_common.sh@10 -- # set +x
00:06:40.297 ************************************ 00:06:40.297 START TEST env_dpdk_post_init 00:06:40.297 ************************************ 00:06:40.297 13:06:50 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:40.297 EAL: Detected CPU lcores: 112 00:06:40.297 EAL: Detected NUMA nodes: 2 00:06:40.297 EAL: Detected shared linkage of DPDK 00:06:40.297 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:40.559 EAL: Selected IOVA mode 'PA' 00:06:40.559 EAL: VFIO support initialized 00:06:40.559 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:06:40.559 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:06:40.559 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.559 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:06:40.559 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.559 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:06:40.559 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:06:40.559 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.559 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:06:40.559 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.559 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:06:40.559 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:06:40.559 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.559 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:06:40.559 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.559 EAL: Probe PCI driver: qat 
(8086:37c9) device: 0000:1a:01.3 (socket 0) 00:06:40.559 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:06:40.559 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.559 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:06:40.559 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.559 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:06:40.559 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:06:40.559 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.559 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:06:40.559 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.559 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:06:40.559 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:06:40.559 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.559 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:06:40.559 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.559 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:06:40.559 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:06:40.559 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: 
Creating cryptodev 0000:1a:01.7_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 
00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 
00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 
0000:1c:01.5_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:06:40.560 CRYPTODEV: Initialisation 
parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:06:40.560 CRYPTODEV: 
Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.560 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:06:40.560 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.560 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, 
max queue pairs: 0 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 
0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.561 
EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:40.561 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:06:40.561 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:40.561 EAL: Probe PCI driver: 
qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3d:02.1 cannot be used 
00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 
0000:3f:01.2 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:40.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.561 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:40.561 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:06:40.562 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:40.562 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:40.562 EAL: Using IOMMU type 1 (Type 1) 00:06:40.562 EAL: Ignore mapping IO port bar(1) 00:06:40.562 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:06:40.562 EAL: Ignore mapping IO port bar(1) 00:06:40.562 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:06:40.562 EAL: Ignore mapping IO port bar(1) 00:06:40.562 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:06:40.562 EAL: Ignore mapping IO port bar(1) 00:06:40.562 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:06:40.562 EAL: Ignore mapping IO port bar(1) 00:06:40.562 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:06:40.562 EAL: Ignore mapping IO port bar(1) 00:06:40.562 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:06:40.562 EAL: Ignore mapping IO port bar(1) 00:06:40.562 EAL: Probe PCI driver: 
spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:06:40.562 EAL: Ignore mapping IO port bar(1) 00:06:40.562 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3d:01.7 cannot be used 
00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 
0000:3f:01.0 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:40.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:40.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.562 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:40.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:40.822 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.822 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:40.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:40.822 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:06:40.822 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:40.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:40.822 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.822 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:40.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:40.822 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.822 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:40.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:06:40.822 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.822 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:40.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:06:40.822 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.822 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:40.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:06:40.822 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.822 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:40.822 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:06:40.822 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:40.822 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:40.822 EAL: Ignore mapping IO port bar(1) 00:06:40.822 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:06:40.822 EAL: Ignore mapping IO port bar(1) 00:06:40.822 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:06:40.822 EAL: Ignore mapping IO port bar(1) 00:06:40.822 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:06:40.822 EAL: Ignore mapping IO port bar(1) 00:06:40.822 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:06:40.822 EAL: Ignore mapping 
IO port bar(1) 00:06:40.822 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:06:40.822 EAL: Ignore mapping IO port bar(1) 00:06:40.822 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:06:40.822 EAL: Ignore mapping IO port bar(1) 00:06:40.822 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:06:40.822 EAL: Ignore mapping IO port bar(1) 00:06:40.822 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:06:41.761 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:06:45.999 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:06:45.999 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001120000 00:06:45.999 Starting DPDK initialization... 00:06:45.999 Starting SPDK post initialization... 00:06:45.999 SPDK NVMe probe 00:06:45.999 Attaching to 0000:d8:00.0 00:06:45.999 Attached to 0000:d8:00.0 00:06:45.999 Cleaning up... 00:06:45.999 00:06:45.999 real 0m5.397s 00:06:45.999 user 0m3.999s 00:06:45.999 sys 0m0.448s 00:06:45.999 13:06:56 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.999 13:06:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:45.999 ************************************ 00:06:45.999 END TEST env_dpdk_post_init 00:06:45.999 ************************************ 00:06:45.999 13:06:56 env -- env/env.sh@26 -- # uname 00:06:45.999 13:06:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:45.999 13:06:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:45.999 13:06:56 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.999 13:06:56 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.999 13:06:56 env -- common/autotest_common.sh@10 -- # set +x 00:06:45.999 ************************************ 00:06:45.999 START TEST 
env_mem_callbacks 00:06:45.999 ************************************ 00:06:45.999 13:06:56 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:45.999 EAL: Detected CPU lcores: 112 00:06:45.999 EAL: Detected NUMA nodes: 2 00:06:45.999 EAL: Detected shared linkage of DPDK 00:06:45.999 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:45.999 EAL: Selected IOVA mode 'PA' 00:06:45.999 EAL: VFIO support initialized 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:06:45.999 
CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 
0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:06:45.999 CRYPTODEV: Initialisation 
parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 
0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:06:45.999 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:45.999 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:06:45.999 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, 
max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) 
device: 0000:1c:02.2 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating 
cryptodev 0000:1c:02.6_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:06:46.000 
CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:06:46.000 
CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 
00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:06:46.000 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.000 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:06:46.000 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:06:46.001 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.001 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:06:46.001 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:06:46.001 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:06:46.001 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:46.001 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:06:46.001 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:06:46.001 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:06:46.001 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:3f:02.4 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:46.001 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:06:46.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.001 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:46.001 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:46.001 00:06:46.001 00:06:46.001 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.001 http://cunit.sourceforge.net/ 00:06:46.001 00:06:46.001 00:06:46.001 Suite: memory 00:06:46.001 Test: test ... 
00:06:46.001 register 0x200000200000 2097152 00:06:46.001 malloc 3145728 00:06:46.001 register 0x200000400000 4194304 00:06:46.001 buf 0x200000500000 len 3145728 PASSED 00:06:46.001 malloc 64 00:06:46.001 buf 0x2000004fff40 len 64 PASSED 00:06:46.001 malloc 4194304 00:06:46.001 register 0x200000800000 6291456 00:06:46.001 buf 0x200000a00000 len 4194304 PASSED 00:06:46.001 free 0x200000500000 3145728 00:06:46.001 free 0x2000004fff40 64 00:06:46.001 unregister 0x200000400000 4194304 PASSED 00:06:46.001 free 0x200000a00000 4194304 00:06:46.001 unregister 0x200000800000 6291456 PASSED 00:06:46.001 malloc 8388608 00:06:46.001 register 0x200000400000 10485760 00:06:46.001 buf 0x200000600000 len 8388608 PASSED 00:06:46.001 free 0x200000600000 8388608 00:06:46.001 unregister 0x200000400000 10485760 PASSED 00:06:46.001 passed 00:06:46.001 00:06:46.001 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.001 suites 1 1 n/a 0 0 00:06:46.001 tests 1 1 1 0 0 00:06:46.001 asserts 15 15 15 0 n/a 00:06:46.001 00:06:46.001 Elapsed time = 0.005 seconds 00:06:46.001 00:06:46.001 real 0m0.114s 00:06:46.001 user 0m0.041s 00:06:46.001 sys 0m0.072s 00:06:46.001 13:06:56 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.001 13:06:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:46.001 ************************************ 00:06:46.001 END TEST env_mem_callbacks 00:06:46.001 ************************************ 00:06:46.001 00:06:46.001 real 0m7.483s 00:06:46.001 user 0m5.073s 00:06:46.001 sys 0m1.473s 00:06:46.001 13:06:56 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.001 13:06:56 env -- common/autotest_common.sh@10 -- # set +x 00:06:46.001 ************************************ 00:06:46.001 END TEST env 00:06:46.001 ************************************ 00:06:46.001 13:06:56 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:06:46.001 13:06:56 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.001 13:06:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.001 13:06:56 -- common/autotest_common.sh@10 -- # set +x 00:06:46.001 ************************************ 00:06:46.001 START TEST rpc 00:06:46.001 ************************************ 00:06:46.001 13:06:56 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:06:46.261 * Looking for test storage... 00:06:46.261 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:46.261 13:06:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=772818 00:06:46.261 13:06:56 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:46.261 13:06:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:46.261 13:06:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 772818 00:06:46.261 13:06:56 rpc -- common/autotest_common.sh@831 -- # '[' -z 772818 ']' 00:06:46.261 13:06:56 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.261 13:06:56 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.261 13:06:56 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.261 13:06:56 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.261 13:06:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.261 [2024-07-25 13:06:56.646664] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:06:46.261 [2024-07-25 13:06:56.646722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772818 ] 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:46.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.261 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:46.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.262 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:46.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.262 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:46.262 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.262 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:46.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.262 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:46.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.262 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:46.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.262 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:46.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.262 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:46.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:46.262 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:46.521 [2024-07-25 13:06:56.779635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.521 [2024-07-25 13:06:56.867722] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:46.521 [2024-07-25 13:06:56.867768] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 772818' to capture a snapshot of events at runtime. 00:06:46.521 [2024-07-25 13:06:56.867781] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.521 [2024-07-25 13:06:56.867793] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.521 [2024-07-25 13:06:56.867803] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid772818 for offline analysis/debug. 
00:06:46.521 [2024-07-25 13:06:56.867839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.780 13:06:57 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.780 13:06:57 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:46.780 13:06:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:46.780 13:06:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:46.780 13:06:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:46.780 13:06:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:46.780 13:06:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.780 13:06:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.780 13:06:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.780 ************************************ 00:06:46.780 START TEST rpc_integrity 00:06:46.780 ************************************ 00:06:46.780 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:46.780 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:46.780 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.780 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.780 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.780 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:06:46.780 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:46.780 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:46.780 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:46.780 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.781 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.781 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.781 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:46.781 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:46.781 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.781 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.781 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.781 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:46.781 { 00:06:46.781 "name": "Malloc0", 00:06:46.781 "aliases": [ 00:06:46.781 "0f82dd35-3ff2-4e8c-8eb3-dabd55c332d9" 00:06:46.781 ], 00:06:46.781 "product_name": "Malloc disk", 00:06:46.781 "block_size": 512, 00:06:46.781 "num_blocks": 16384, 00:06:46.781 "uuid": "0f82dd35-3ff2-4e8c-8eb3-dabd55c332d9", 00:06:46.781 "assigned_rate_limits": { 00:06:46.781 "rw_ios_per_sec": 0, 00:06:46.781 "rw_mbytes_per_sec": 0, 00:06:46.781 "r_mbytes_per_sec": 0, 00:06:46.781 "w_mbytes_per_sec": 0 00:06:46.781 }, 00:06:46.781 "claimed": false, 00:06:46.781 "zoned": false, 00:06:46.781 "supported_io_types": { 00:06:46.781 "read": true, 00:06:46.781 "write": true, 00:06:46.781 "unmap": true, 00:06:46.781 "flush": true, 00:06:46.781 "reset": true, 00:06:46.781 "nvme_admin": false, 00:06:46.781 "nvme_io": false, 00:06:46.781 "nvme_io_md": false, 00:06:46.781 "write_zeroes": true, 00:06:46.781 "zcopy": true, 00:06:46.781 "get_zone_info": false, 00:06:46.781 "zone_management": 
false, 00:06:46.781 "zone_append": false, 00:06:46.781 "compare": false, 00:06:46.781 "compare_and_write": false, 00:06:46.781 "abort": true, 00:06:46.781 "seek_hole": false, 00:06:46.781 "seek_data": false, 00:06:46.781 "copy": true, 00:06:46.781 "nvme_iov_md": false 00:06:46.781 }, 00:06:46.781 "memory_domains": [ 00:06:46.781 { 00:06:46.781 "dma_device_id": "system", 00:06:46.781 "dma_device_type": 1 00:06:46.781 }, 00:06:46.781 { 00:06:46.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.781 "dma_device_type": 2 00:06:46.781 } 00:06:46.781 ], 00:06:46.781 "driver_specific": {} 00:06:46.781 } 00:06:46.781 ]' 00:06:46.781 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:47.040 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:47.040 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:47.040 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.040 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.040 [2024-07-25 13:06:57.293472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:47.040 [2024-07-25 13:06:57.293509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.040 [2024-07-25 13:06:57.293527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb785f0 00:06:47.040 [2024-07-25 13:06:57.293539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.040 [2024-07-25 13:06:57.294997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.040 [2024-07-25 13:06:57.295024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:47.040 Passthru0 00:06:47.040 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.040 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 
00:06:47.040 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.040 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.040 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.040 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:47.040 { 00:06:47.040 "name": "Malloc0", 00:06:47.041 "aliases": [ 00:06:47.041 "0f82dd35-3ff2-4e8c-8eb3-dabd55c332d9" 00:06:47.041 ], 00:06:47.041 "product_name": "Malloc disk", 00:06:47.041 "block_size": 512, 00:06:47.041 "num_blocks": 16384, 00:06:47.041 "uuid": "0f82dd35-3ff2-4e8c-8eb3-dabd55c332d9", 00:06:47.041 "assigned_rate_limits": { 00:06:47.041 "rw_ios_per_sec": 0, 00:06:47.041 "rw_mbytes_per_sec": 0, 00:06:47.041 "r_mbytes_per_sec": 0, 00:06:47.041 "w_mbytes_per_sec": 0 00:06:47.041 }, 00:06:47.041 "claimed": true, 00:06:47.041 "claim_type": "exclusive_write", 00:06:47.041 "zoned": false, 00:06:47.041 "supported_io_types": { 00:06:47.041 "read": true, 00:06:47.041 "write": true, 00:06:47.041 "unmap": true, 00:06:47.041 "flush": true, 00:06:47.041 "reset": true, 00:06:47.041 "nvme_admin": false, 00:06:47.041 "nvme_io": false, 00:06:47.041 "nvme_io_md": false, 00:06:47.041 "write_zeroes": true, 00:06:47.041 "zcopy": true, 00:06:47.041 "get_zone_info": false, 00:06:47.041 "zone_management": false, 00:06:47.041 "zone_append": false, 00:06:47.041 "compare": false, 00:06:47.041 "compare_and_write": false, 00:06:47.041 "abort": true, 00:06:47.041 "seek_hole": false, 00:06:47.041 "seek_data": false, 00:06:47.041 "copy": true, 00:06:47.041 "nvme_iov_md": false 00:06:47.041 }, 00:06:47.041 "memory_domains": [ 00:06:47.041 { 00:06:47.041 "dma_device_id": "system", 00:06:47.041 "dma_device_type": 1 00:06:47.041 }, 00:06:47.041 { 00:06:47.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.041 "dma_device_type": 2 00:06:47.041 } 00:06:47.041 ], 00:06:47.041 "driver_specific": {} 00:06:47.041 }, 00:06:47.041 { 00:06:47.041 
"name": "Passthru0", 00:06:47.041 "aliases": [ 00:06:47.041 "0a3608b5-5e82-56da-8956-57f33cdddd0b" 00:06:47.041 ], 00:06:47.041 "product_name": "passthru", 00:06:47.041 "block_size": 512, 00:06:47.041 "num_blocks": 16384, 00:06:47.041 "uuid": "0a3608b5-5e82-56da-8956-57f33cdddd0b", 00:06:47.041 "assigned_rate_limits": { 00:06:47.041 "rw_ios_per_sec": 0, 00:06:47.041 "rw_mbytes_per_sec": 0, 00:06:47.041 "r_mbytes_per_sec": 0, 00:06:47.041 "w_mbytes_per_sec": 0 00:06:47.041 }, 00:06:47.041 "claimed": false, 00:06:47.041 "zoned": false, 00:06:47.041 "supported_io_types": { 00:06:47.041 "read": true, 00:06:47.041 "write": true, 00:06:47.041 "unmap": true, 00:06:47.041 "flush": true, 00:06:47.041 "reset": true, 00:06:47.041 "nvme_admin": false, 00:06:47.041 "nvme_io": false, 00:06:47.041 "nvme_io_md": false, 00:06:47.041 "write_zeroes": true, 00:06:47.041 "zcopy": true, 00:06:47.041 "get_zone_info": false, 00:06:47.041 "zone_management": false, 00:06:47.041 "zone_append": false, 00:06:47.041 "compare": false, 00:06:47.041 "compare_and_write": false, 00:06:47.041 "abort": true, 00:06:47.041 "seek_hole": false, 00:06:47.041 "seek_data": false, 00:06:47.041 "copy": true, 00:06:47.041 "nvme_iov_md": false 00:06:47.041 }, 00:06:47.041 "memory_domains": [ 00:06:47.041 { 00:06:47.041 "dma_device_id": "system", 00:06:47.041 "dma_device_type": 1 00:06:47.041 }, 00:06:47.041 { 00:06:47.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.041 "dma_device_type": 2 00:06:47.041 } 00:06:47.041 ], 00:06:47.041 "driver_specific": { 00:06:47.041 "passthru": { 00:06:47.041 "name": "Passthru0", 00:06:47.041 "base_bdev_name": "Malloc0" 00:06:47.041 } 00:06:47.041 } 00:06:47.041 } 00:06:47.041 ]' 00:06:47.041 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:47.041 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:47.041 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:47.041 13:06:57 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.041 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.041 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.041 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:47.041 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.041 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.041 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.041 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:47.041 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.041 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.041 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.041 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:47.041 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:47.041 13:06:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:47.041 00:06:47.041 real 0m0.304s 00:06:47.041 user 0m0.184s 00:06:47.041 sys 0m0.056s 00:06:47.041 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.041 13:06:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.041 ************************************ 00:06:47.041 END TEST rpc_integrity 00:06:47.041 ************************************ 00:06:47.041 13:06:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:47.041 13:06:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.041 13:06:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.041 13:06:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.300 ************************************ 00:06:47.300 START TEST rpc_plugins 00:06:47.300 
************************************ 00:06:47.300 13:06:57 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:47.300 13:06:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:47.300 13:06:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.300 13:06:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.300 13:06:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.300 13:06:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:47.300 13:06:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:47.300 13:06:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.300 13:06:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.300 13:06:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.300 13:06:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:47.300 { 00:06:47.300 "name": "Malloc1", 00:06:47.300 "aliases": [ 00:06:47.300 "1c481e18-2cd9-4f78-aa86-98dd71b2495b" 00:06:47.300 ], 00:06:47.300 "product_name": "Malloc disk", 00:06:47.300 "block_size": 4096, 00:06:47.300 "num_blocks": 256, 00:06:47.300 "uuid": "1c481e18-2cd9-4f78-aa86-98dd71b2495b", 00:06:47.300 "assigned_rate_limits": { 00:06:47.300 "rw_ios_per_sec": 0, 00:06:47.300 "rw_mbytes_per_sec": 0, 00:06:47.300 "r_mbytes_per_sec": 0, 00:06:47.300 "w_mbytes_per_sec": 0 00:06:47.300 }, 00:06:47.300 "claimed": false, 00:06:47.300 "zoned": false, 00:06:47.300 "supported_io_types": { 00:06:47.300 "read": true, 00:06:47.300 "write": true, 00:06:47.300 "unmap": true, 00:06:47.300 "flush": true, 00:06:47.300 "reset": true, 00:06:47.300 "nvme_admin": false, 00:06:47.300 "nvme_io": false, 00:06:47.300 "nvme_io_md": false, 00:06:47.300 "write_zeroes": true, 00:06:47.300 "zcopy": true, 00:06:47.300 "get_zone_info": false, 00:06:47.300 "zone_management": false, 00:06:47.300 "zone_append": false, 
00:06:47.300 "compare": false, 00:06:47.300 "compare_and_write": false, 00:06:47.300 "abort": true, 00:06:47.300 "seek_hole": false, 00:06:47.300 "seek_data": false, 00:06:47.300 "copy": true, 00:06:47.300 "nvme_iov_md": false 00:06:47.300 }, 00:06:47.300 "memory_domains": [ 00:06:47.300 { 00:06:47.300 "dma_device_id": "system", 00:06:47.300 "dma_device_type": 1 00:06:47.300 }, 00:06:47.300 { 00:06:47.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.300 "dma_device_type": 2 00:06:47.300 } 00:06:47.300 ], 00:06:47.300 "driver_specific": {} 00:06:47.300 } 00:06:47.300 ]' 00:06:47.300 13:06:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:47.300 13:06:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:47.300 13:06:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:47.300 13:06:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.300 13:06:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.300 13:06:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.300 13:06:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:47.300 13:06:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.300 13:06:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.300 13:06:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.300 13:06:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:47.300 13:06:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:47.300 13:06:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:47.300 00:06:47.300 real 0m0.151s 00:06:47.300 user 0m0.093s 00:06:47.300 sys 0m0.025s 00:06:47.300 13:06:57 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.300 13:06:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.300 ************************************ 00:06:47.300 END TEST 
rpc_plugins 00:06:47.301 ************************************ 00:06:47.301 13:06:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:47.301 13:06:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.301 13:06:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.301 13:06:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.301 ************************************ 00:06:47.301 START TEST rpc_trace_cmd_test 00:06:47.301 ************************************ 00:06:47.301 13:06:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:47.301 13:06:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:47.301 13:06:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:47.301 13:06:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.301 13:06:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.301 13:06:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.301 13:06:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:47.301 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid772818", 00:06:47.301 "tpoint_group_mask": "0x8", 00:06:47.301 "iscsi_conn": { 00:06:47.301 "mask": "0x2", 00:06:47.301 "tpoint_mask": "0x0" 00:06:47.301 }, 00:06:47.301 "scsi": { 00:06:47.301 "mask": "0x4", 00:06:47.301 "tpoint_mask": "0x0" 00:06:47.301 }, 00:06:47.301 "bdev": { 00:06:47.301 "mask": "0x8", 00:06:47.301 "tpoint_mask": "0xffffffffffffffff" 00:06:47.301 }, 00:06:47.301 "nvmf_rdma": { 00:06:47.301 "mask": "0x10", 00:06:47.301 "tpoint_mask": "0x0" 00:06:47.301 }, 00:06:47.301 "nvmf_tcp": { 00:06:47.301 "mask": "0x20", 00:06:47.301 "tpoint_mask": "0x0" 00:06:47.301 }, 00:06:47.301 "ftl": { 00:06:47.301 "mask": "0x40", 00:06:47.301 "tpoint_mask": "0x0" 00:06:47.301 }, 00:06:47.301 "blobfs": { 00:06:47.301 "mask": "0x80", 00:06:47.301 "tpoint_mask": "0x0" 
00:06:47.301 }, 00:06:47.301 "dsa": { 00:06:47.301 "mask": "0x200", 00:06:47.301 "tpoint_mask": "0x0" 00:06:47.301 }, 00:06:47.301 "thread": { 00:06:47.301 "mask": "0x400", 00:06:47.301 "tpoint_mask": "0x0" 00:06:47.301 }, 00:06:47.301 "nvme_pcie": { 00:06:47.301 "mask": "0x800", 00:06:47.301 "tpoint_mask": "0x0" 00:06:47.301 }, 00:06:47.301 "iaa": { 00:06:47.301 "mask": "0x1000", 00:06:47.301 "tpoint_mask": "0x0" 00:06:47.301 }, 00:06:47.301 "nvme_tcp": { 00:06:47.301 "mask": "0x2000", 00:06:47.301 "tpoint_mask": "0x0" 00:06:47.301 }, 00:06:47.301 "bdev_nvme": { 00:06:47.301 "mask": "0x4000", 00:06:47.301 "tpoint_mask": "0x0" 00:06:47.301 }, 00:06:47.301 "sock": { 00:06:47.301 "mask": "0x8000", 00:06:47.301 "tpoint_mask": "0x0" 00:06:47.301 } 00:06:47.301 }' 00:06:47.560 13:06:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:47.560 13:06:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:47.560 13:06:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:47.560 13:06:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:47.560 13:06:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:47.560 13:06:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:47.560 13:06:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:47.560 13:06:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:47.560 13:06:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:47.560 13:06:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:47.560 00:06:47.560 real 0m0.242s 00:06:47.560 user 0m0.195s 00:06:47.560 sys 0m0.039s 00:06:47.560 13:06:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.560 13:06:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.560 ************************************ 
00:06:47.560 END TEST rpc_trace_cmd_test 00:06:47.560 ************************************ 00:06:47.560 13:06:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:47.560 13:06:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:47.560 13:06:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:47.560 13:06:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.560 13:06:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.560 13:06:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.820 ************************************ 00:06:47.820 START TEST rpc_daemon_integrity 00:06:47.820 ************************************ 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:47.820 13:06:58 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:47.820 { 00:06:47.820 "name": "Malloc2", 00:06:47.820 "aliases": [ 00:06:47.820 "76f2e9a0-9b5d-4bc2-b744-4b737514f34b" 00:06:47.820 ], 00:06:47.820 "product_name": "Malloc disk", 00:06:47.820 "block_size": 512, 00:06:47.820 "num_blocks": 16384, 00:06:47.820 "uuid": "76f2e9a0-9b5d-4bc2-b744-4b737514f34b", 00:06:47.820 "assigned_rate_limits": { 00:06:47.820 "rw_ios_per_sec": 0, 00:06:47.820 "rw_mbytes_per_sec": 0, 00:06:47.820 "r_mbytes_per_sec": 0, 00:06:47.820 "w_mbytes_per_sec": 0 00:06:47.820 }, 00:06:47.820 "claimed": false, 00:06:47.820 "zoned": false, 00:06:47.820 "supported_io_types": { 00:06:47.820 "read": true, 00:06:47.820 "write": true, 00:06:47.820 "unmap": true, 00:06:47.820 "flush": true, 00:06:47.820 "reset": true, 00:06:47.820 "nvme_admin": false, 00:06:47.820 "nvme_io": false, 00:06:47.820 "nvme_io_md": false, 00:06:47.820 "write_zeroes": true, 00:06:47.820 "zcopy": true, 00:06:47.820 "get_zone_info": false, 00:06:47.820 "zone_management": false, 00:06:47.820 "zone_append": false, 00:06:47.820 "compare": false, 00:06:47.820 "compare_and_write": false, 00:06:47.820 "abort": true, 00:06:47.820 "seek_hole": false, 00:06:47.820 "seek_data": false, 00:06:47.820 "copy": true, 00:06:47.820 "nvme_iov_md": false 00:06:47.820 }, 00:06:47.820 "memory_domains": [ 00:06:47.820 { 00:06:47.820 "dma_device_id": "system", 00:06:47.820 "dma_device_type": 1 00:06:47.820 }, 00:06:47.820 { 00:06:47.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.820 "dma_device_type": 2 00:06:47.820 } 00:06:47.820 ], 00:06:47.820 "driver_specific": {} 00:06:47.820 } 00:06:47.820 ]' 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@17 -- # jq length 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.820 [2024-07-25 13:06:58.224096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:47.820 [2024-07-25 13:06:58.224131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.820 [2024-07-25 13:06:58.224157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb782b0 00:06:47.820 [2024-07-25 13:06:58.224169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.820 [2024-07-25 13:06:58.225409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.820 [2024-07-25 13:06:58.225435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:47.820 Passthru0 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.820 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:47.820 { 00:06:47.820 "name": "Malloc2", 00:06:47.820 "aliases": [ 00:06:47.820 "76f2e9a0-9b5d-4bc2-b744-4b737514f34b" 00:06:47.820 ], 00:06:47.820 "product_name": "Malloc disk", 00:06:47.820 "block_size": 512, 00:06:47.820 "num_blocks": 16384, 00:06:47.820 
"uuid": "76f2e9a0-9b5d-4bc2-b744-4b737514f34b", 00:06:47.820 "assigned_rate_limits": { 00:06:47.820 "rw_ios_per_sec": 0, 00:06:47.820 "rw_mbytes_per_sec": 0, 00:06:47.820 "r_mbytes_per_sec": 0, 00:06:47.820 "w_mbytes_per_sec": 0 00:06:47.820 }, 00:06:47.820 "claimed": true, 00:06:47.820 "claim_type": "exclusive_write", 00:06:47.820 "zoned": false, 00:06:47.820 "supported_io_types": { 00:06:47.820 "read": true, 00:06:47.820 "write": true, 00:06:47.820 "unmap": true, 00:06:47.820 "flush": true, 00:06:47.820 "reset": true, 00:06:47.820 "nvme_admin": false, 00:06:47.820 "nvme_io": false, 00:06:47.820 "nvme_io_md": false, 00:06:47.820 "write_zeroes": true, 00:06:47.820 "zcopy": true, 00:06:47.820 "get_zone_info": false, 00:06:47.820 "zone_management": false, 00:06:47.820 "zone_append": false, 00:06:47.820 "compare": false, 00:06:47.820 "compare_and_write": false, 00:06:47.820 "abort": true, 00:06:47.820 "seek_hole": false, 00:06:47.820 "seek_data": false, 00:06:47.820 "copy": true, 00:06:47.820 "nvme_iov_md": false 00:06:47.820 }, 00:06:47.820 "memory_domains": [ 00:06:47.820 { 00:06:47.820 "dma_device_id": "system", 00:06:47.820 "dma_device_type": 1 00:06:47.820 }, 00:06:47.820 { 00:06:47.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.820 "dma_device_type": 2 00:06:47.820 } 00:06:47.820 ], 00:06:47.820 "driver_specific": {} 00:06:47.820 }, 00:06:47.820 { 00:06:47.820 "name": "Passthru0", 00:06:47.820 "aliases": [ 00:06:47.820 "2cb6fc00-90ce-503c-9e9c-2fc2b5b8696f" 00:06:47.820 ], 00:06:47.820 "product_name": "passthru", 00:06:47.820 "block_size": 512, 00:06:47.820 "num_blocks": 16384, 00:06:47.820 "uuid": "2cb6fc00-90ce-503c-9e9c-2fc2b5b8696f", 00:06:47.820 "assigned_rate_limits": { 00:06:47.820 "rw_ios_per_sec": 0, 00:06:47.820 "rw_mbytes_per_sec": 0, 00:06:47.820 "r_mbytes_per_sec": 0, 00:06:47.820 "w_mbytes_per_sec": 0 00:06:47.820 }, 00:06:47.820 "claimed": false, 00:06:47.820 "zoned": false, 00:06:47.820 "supported_io_types": { 00:06:47.820 "read": true, 
00:06:47.820 "write": true, 00:06:47.820 "unmap": true, 00:06:47.820 "flush": true, 00:06:47.820 "reset": true, 00:06:47.820 "nvme_admin": false, 00:06:47.820 "nvme_io": false, 00:06:47.820 "nvme_io_md": false, 00:06:47.820 "write_zeroes": true, 00:06:47.820 "zcopy": true, 00:06:47.821 "get_zone_info": false, 00:06:47.821 "zone_management": false, 00:06:47.821 "zone_append": false, 00:06:47.821 "compare": false, 00:06:47.821 "compare_and_write": false, 00:06:47.821 "abort": true, 00:06:47.821 "seek_hole": false, 00:06:47.821 "seek_data": false, 00:06:47.821 "copy": true, 00:06:47.821 "nvme_iov_md": false 00:06:47.821 }, 00:06:47.821 "memory_domains": [ 00:06:47.821 { 00:06:47.821 "dma_device_id": "system", 00:06:47.821 "dma_device_type": 1 00:06:47.821 }, 00:06:47.821 { 00:06:47.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.821 "dma_device_type": 2 00:06:47.821 } 00:06:47.821 ], 00:06:47.821 "driver_specific": { 00:06:47.821 "passthru": { 00:06:47.821 "name": "Passthru0", 00:06:47.821 "base_bdev_name": "Malloc2" 00:06:47.821 } 00:06:47.821 } 00:06:47.821 } 00:06:47.821 ]' 00:06:47.821 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:48.080 00:06:48.080 real 0m0.296s 00:06:48.080 user 0m0.180s 00:06:48.080 sys 0m0.050s 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.080 13:06:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.080 ************************************ 00:06:48.080 END TEST rpc_daemon_integrity 00:06:48.080 ************************************ 00:06:48.080 13:06:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:48.080 13:06:58 rpc -- rpc/rpc.sh@84 -- # killprocess 772818 00:06:48.080 13:06:58 rpc -- common/autotest_common.sh@950 -- # '[' -z 772818 ']' 00:06:48.080 13:06:58 rpc -- common/autotest_common.sh@954 -- # kill -0 772818 00:06:48.080 13:06:58 rpc -- common/autotest_common.sh@955 -- # uname 00:06:48.080 13:06:58 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.080 13:06:58 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 772818 00:06:48.080 13:06:58 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.080 13:06:58 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.080 13:06:58 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 772818' 00:06:48.080 killing process with pid 772818 00:06:48.080 
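The rpc_daemon_integrity test above repeatedly pipes RPC output through `jq length` to count bdevs before and after create/delete (0 at the start, 1 after `bdev_malloc_create`, 2 after `bdev_passthru_create`, 0 again after teardown). A minimal sketch of that counting check on a canned two-element list; the real test runs `rpc_cmd bdev_get_bdevs | jq length`, while this sketch counts with grep so it has no jq dependency, and the JSON here is illustrative:

```shell
# Canned stand-in for "rpc_cmd bdev_get_bdevs" output after the passthru
# bdev was created: a list with Malloc2 and Passthru0.
bdevs='[ {"name": "Malloc2"}, {"name": "Passthru0"} ]'

# Count list entries by counting "name" keys (the real test uses jq length).
count=$(printf '%s' "$bdevs" | grep -o '"name"' | wc -l)

# Same shape as the test's check: '[' 2 == 2 ']'
[ "$count" -eq 2 ] && echo "bdev count matches"
```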
13:06:58 rpc -- common/autotest_common.sh@969 -- # kill 772818 00:06:48.080 13:06:58 rpc -- common/autotest_common.sh@974 -- # wait 772818 00:06:48.340 00:06:48.340 real 0m2.331s 00:06:48.340 user 0m3.134s 00:06:48.340 sys 0m0.896s 00:06:48.340 13:06:58 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.340 13:06:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.340 ************************************ 00:06:48.340 END TEST rpc 00:06:48.340 ************************************ 00:06:48.599 13:06:58 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:48.599 13:06:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.599 13:06:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.599 13:06:58 -- common/autotest_common.sh@10 -- # set +x 00:06:48.599 ************************************ 00:06:48.599 START TEST skip_rpc 00:06:48.599 ************************************ 00:06:48.599 13:06:58 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:48.599 * Looking for test storage... 
00:06:48.599 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:48.599 13:06:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:06:48.599 13:06:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:06:48.599 13:06:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:48.599 13:06:58 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.599 13:06:58 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.599 13:06:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.599 ************************************ 00:06:48.599 START TEST skip_rpc 00:06:48.599 ************************************ 00:06:48.599 13:06:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:48.599 13:06:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=773444 00:06:48.599 13:06:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.599 13:06:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:48.599 13:06:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:48.859 [2024-07-25 13:06:59.094405] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
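The skip_rpc test just launched `spdk_tgt --no-rpc-server` in the background after registering `trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT`. A minimal sketch of that background-process/trap lifecycle, with `sleep` standing in for the real spdk_tgt binary (the helper names and timings here are illustrative, not copied from autotest_common.sh):

```shell
# Start a long-lived process in the background (stand-in for spdk_tgt)
# and record its pid, as skip_rpc.sh@15/@16 do.
sleep 30 &
spdk_pid=$!

# Register cleanup so an interrupted run still kills the target.
trap 'kill -9 "$spdk_pid" 2>/dev/null; exit 1' INT TERM

# ... the test body would talk to the target here ...

# Orderly teardown: kill the target, reap it, clear the trap.
kill "$spdk_pid" 2>/dev/null || true
wait "$spdk_pid" 2>/dev/null || true
trap - INT TERM
echo "target stopped"
```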
00:06:48.859 [2024-07-25 13:06:59.094463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773444 ] 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3d:02.4 cannot be used [... identical "Reached maximum number of QAT devices" / "cannot be used" pairs repeated for devices 0000:3d:02.5 through 0000:3f:02.0 ...] 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:48.859 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:48.859 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.859 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:48.859 [2024-07-25 13:06:59.227115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.859 [2024-07-25 13:06:59.309641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- 
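With the target running but its RPC server disabled, the test wraps `rpc_cmd spdk_get_version` in `NOT` and expects the call to fail with `es=1`. A simplified sketch of that inverted-assertion helper (the real `NOT`/`valid_exec_arg` machinery in autotest_common.sh does more bookkeeping; `false` below stands in for an rpc_cmd call against a `--no-rpc-server` target):

```shell
# Succeed only when the wrapped command fails, mirroring
# "NOT rpc_cmd spdk_get_version" in the log above.
NOT() {
  if "$@"; then
    return 1   # the command was expected to fail but succeeded
  fi
  return 0     # expected failure observed
}

# "false" always fails, like an RPC call with no RPC server listening.
if NOT false; then
  echo "rpc correctly unavailable"
fi
```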
common/autotest_common.sh@10 -- # set +x 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 773444 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 773444 ']' 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 773444 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 773444 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 773444' 00:06:54.132 killing process with pid 773444 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 773444 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 773444 00:06:54.132 00:06:54.132 real 0m5.408s 00:06:54.132 user 0m5.077s 00:06:54.132 sys 0m0.353s 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.132 13:07:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.132 
************************************ 00:06:54.132 END TEST skip_rpc 00:06:54.132 ************************************ 00:06:54.132 13:07:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:54.132 13:07:04 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.132 13:07:04 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.132 13:07:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.132 ************************************ 00:06:54.132 START TEST skip_rpc_with_json 00:06:54.132 ************************************ 00:06:54.132 13:07:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:54.132 13:07:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:54.132 13:07:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=774439 00:06:54.132 13:07:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.132 13:07:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.132 13:07:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 774439 00:06:54.132 13:07:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 774439 ']' 00:06:54.132 13:07:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.132 13:07:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.132 13:07:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
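The `waitforlisten 774439` call above blocks until the new spdk_tgt exposes its UNIX domain socket at /var/tmp/spdk.sock. A minimal sketch of such a polling loop; the real helper in autotest_common.sh also verifies the pid and probes via rpc.py, so the function below is illustrative only:

```shell
# Poll for a UNIX-domain socket path with a bounded retry budget,
# roughly what a waitforlisten-style helper does.
waitforsocket() {
  sock=$1
  retries=${2:-100}
  while [ "$retries" -gt 0 ]; do
    [ -S "$sock" ] && return 0   # socket exists: target is listening
    retries=$((retries - 1))
    sleep 0.1
  done
  return 1                        # timed out
}

# With nothing creating the socket, this fails after the retry budget.
waitforsocket "/tmp/spdk-demo-$$.sock" 3 || echo "timed out waiting"
```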
00:06:54.132 13:07:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.132 13:07:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:54.132 [2024-07-25 13:07:04.594693] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:06:54.132 [2024-07-25 13:07:04.594753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774439 ] 00:06:54.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 
0000:3d:02.1 cannot be used [... identical "Reached maximum number of QAT devices" / "cannot be used" pairs repeated for devices 0000:3d:02.2 through 0000:3f:01.6 ...] 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:54.392 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.392 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:54.392 [2024-07-25 13:07:04.727477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.392 [2024-07-25 13:07:04.813308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.330 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.330 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:55.330 13:07:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:55.330 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.330 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:55.330 [2024-07-25 13:07:05.489297] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:55.330 
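The `rpc_cmd nvmf_get_transports --trtype tcp` call above fails with the *ERROR* just logged ("transport 'tcp' does not exist"), and the JSON-RPC error body shown next carries code -19 ("No such device") because `nvmf_create_transport -t tcp` has not run yet. A small sketch of asserting on that error shape in plain shell; the response string is copied from the log, but the check itself is illustrative (the real test simply expects the `rpc_cmd` to fail):

```shell
# JSON-RPC error body as reported by the target before the TCP transport
# was created (copied from the log output below).
resp='{"code": -19, "message": "No such device"}'

# Pattern-match the expected error code without external tools.
case "$resp" in
  *'"code": -19'*) echo "got expected error: No such device" ;;
  *)               echo "unexpected response: $resp" ;;
esac
```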
request: 00:06:55.330 { 00:06:55.330 "trtype": "tcp", 00:06:55.330 "method": "nvmf_get_transports", 00:06:55.330 "req_id": 1 00:06:55.330 } 00:06:55.330 Got JSON-RPC error response 00:06:55.330 response: 00:06:55.330 { 00:06:55.330 "code": -19, 00:06:55.330 "message": "No such device" 00:06:55.330 } 00:06:55.330 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:55.331 [2024-07-25 13:07:05.501433] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:06:55.331 { 00:06:55.331 "subsystems": [ 00:06:55.331 { 00:06:55.331 "subsystem": "keyring", 00:06:55.331 "config": [] 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "subsystem": "iobuf", 00:06:55.331 "config": [ 00:06:55.331 { 00:06:55.331 "method": "iobuf_set_options", 00:06:55.331 "params": { 00:06:55.331 "small_pool_count": 8192, 00:06:55.331 "large_pool_count": 1024, 00:06:55.331 "small_bufsize": 8192, 00:06:55.331 "large_bufsize": 135168 00:06:55.331 } 00:06:55.331 } 00:06:55.331 ] 00:06:55.331 }, 00:06:55.331 { 
00:06:55.331 "subsystem": "sock", 00:06:55.331 "config": [ 00:06:55.331 { 00:06:55.331 "method": "sock_set_default_impl", 00:06:55.331 "params": { 00:06:55.331 "impl_name": "posix" 00:06:55.331 } 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "method": "sock_impl_set_options", 00:06:55.331 "params": { 00:06:55.331 "impl_name": "ssl", 00:06:55.331 "recv_buf_size": 4096, 00:06:55.331 "send_buf_size": 4096, 00:06:55.331 "enable_recv_pipe": true, 00:06:55.331 "enable_quickack": false, 00:06:55.331 "enable_placement_id": 0, 00:06:55.331 "enable_zerocopy_send_server": true, 00:06:55.331 "enable_zerocopy_send_client": false, 00:06:55.331 "zerocopy_threshold": 0, 00:06:55.331 "tls_version": 0, 00:06:55.331 "enable_ktls": false 00:06:55.331 } 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "method": "sock_impl_set_options", 00:06:55.331 "params": { 00:06:55.331 "impl_name": "posix", 00:06:55.331 "recv_buf_size": 2097152, 00:06:55.331 "send_buf_size": 2097152, 00:06:55.331 "enable_recv_pipe": true, 00:06:55.331 "enable_quickack": false, 00:06:55.331 "enable_placement_id": 0, 00:06:55.331 "enable_zerocopy_send_server": true, 00:06:55.331 "enable_zerocopy_send_client": false, 00:06:55.331 "zerocopy_threshold": 0, 00:06:55.331 "tls_version": 0, 00:06:55.331 "enable_ktls": false 00:06:55.331 } 00:06:55.331 } 00:06:55.331 ] 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "subsystem": "vmd", 00:06:55.331 "config": [] 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "subsystem": "accel", 00:06:55.331 "config": [ 00:06:55.331 { 00:06:55.331 "method": "accel_set_options", 00:06:55.331 "params": { 00:06:55.331 "small_cache_size": 128, 00:06:55.331 "large_cache_size": 16, 00:06:55.331 "task_count": 2048, 00:06:55.331 "sequence_count": 2048, 00:06:55.331 "buf_count": 2048 00:06:55.331 } 00:06:55.331 } 00:06:55.331 ] 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "subsystem": "bdev", 00:06:55.331 "config": [ 00:06:55.331 { 00:06:55.331 "method": "bdev_set_options", 00:06:55.331 "params": { 00:06:55.331 
"bdev_io_pool_size": 65535, 00:06:55.331 "bdev_io_cache_size": 256, 00:06:55.331 "bdev_auto_examine": true, 00:06:55.331 "iobuf_small_cache_size": 128, 00:06:55.331 "iobuf_large_cache_size": 16 00:06:55.331 } 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "method": "bdev_raid_set_options", 00:06:55.331 "params": { 00:06:55.331 "process_window_size_kb": 1024, 00:06:55.331 "process_max_bandwidth_mb_sec": 0 00:06:55.331 } 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "method": "bdev_iscsi_set_options", 00:06:55.331 "params": { 00:06:55.331 "timeout_sec": 30 00:06:55.331 } 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "method": "bdev_nvme_set_options", 00:06:55.331 "params": { 00:06:55.331 "action_on_timeout": "none", 00:06:55.331 "timeout_us": 0, 00:06:55.331 "timeout_admin_us": 0, 00:06:55.331 "keep_alive_timeout_ms": 10000, 00:06:55.331 "arbitration_burst": 0, 00:06:55.331 "low_priority_weight": 0, 00:06:55.331 "medium_priority_weight": 0, 00:06:55.331 "high_priority_weight": 0, 00:06:55.331 "nvme_adminq_poll_period_us": 10000, 00:06:55.331 "nvme_ioq_poll_period_us": 0, 00:06:55.331 "io_queue_requests": 0, 00:06:55.331 "delay_cmd_submit": true, 00:06:55.331 "transport_retry_count": 4, 00:06:55.331 "bdev_retry_count": 3, 00:06:55.331 "transport_ack_timeout": 0, 00:06:55.331 "ctrlr_loss_timeout_sec": 0, 00:06:55.331 "reconnect_delay_sec": 0, 00:06:55.331 "fast_io_fail_timeout_sec": 0, 00:06:55.331 "disable_auto_failback": false, 00:06:55.331 "generate_uuids": false, 00:06:55.331 "transport_tos": 0, 00:06:55.331 "nvme_error_stat": false, 00:06:55.331 "rdma_srq_size": 0, 00:06:55.331 "io_path_stat": false, 00:06:55.331 "allow_accel_sequence": false, 00:06:55.331 "rdma_max_cq_size": 0, 00:06:55.331 "rdma_cm_event_timeout_ms": 0, 00:06:55.331 "dhchap_digests": [ 00:06:55.331 "sha256", 00:06:55.331 "sha384", 00:06:55.331 "sha512" 00:06:55.331 ], 00:06:55.331 "dhchap_dhgroups": [ 00:06:55.331 "null", 00:06:55.331 "ffdhe2048", 00:06:55.331 "ffdhe3072", 00:06:55.331 "ffdhe4096", 
00:06:55.331 "ffdhe6144", 00:06:55.331 "ffdhe8192" 00:06:55.331 ] 00:06:55.331 } 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "method": "bdev_nvme_set_hotplug", 00:06:55.331 "params": { 00:06:55.331 "period_us": 100000, 00:06:55.331 "enable": false 00:06:55.331 } 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "method": "bdev_wait_for_examine" 00:06:55.331 } 00:06:55.331 ] 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "subsystem": "scsi", 00:06:55.331 "config": null 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "subsystem": "scheduler", 00:06:55.331 "config": [ 00:06:55.331 { 00:06:55.331 "method": "framework_set_scheduler", 00:06:55.331 "params": { 00:06:55.331 "name": "static" 00:06:55.331 } 00:06:55.331 } 00:06:55.331 ] 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "subsystem": "vhost_scsi", 00:06:55.331 "config": [] 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "subsystem": "vhost_blk", 00:06:55.331 "config": [] 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "subsystem": "ublk", 00:06:55.331 "config": [] 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "subsystem": "nbd", 00:06:55.331 "config": [] 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "subsystem": "nvmf", 00:06:55.331 "config": [ 00:06:55.331 { 00:06:55.331 "method": "nvmf_set_config", 00:06:55.331 "params": { 00:06:55.331 "discovery_filter": "match_any", 00:06:55.331 "admin_cmd_passthru": { 00:06:55.331 "identify_ctrlr": false 00:06:55.331 } 00:06:55.331 } 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "method": "nvmf_set_max_subsystems", 00:06:55.331 "params": { 00:06:55.331 "max_subsystems": 1024 00:06:55.331 } 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "method": "nvmf_set_crdt", 00:06:55.331 "params": { 00:06:55.331 "crdt1": 0, 00:06:55.331 "crdt2": 0, 00:06:55.331 "crdt3": 0 00:06:55.331 } 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "method": "nvmf_create_transport", 00:06:55.331 "params": { 00:06:55.331 "trtype": "TCP", 00:06:55.331 "max_queue_depth": 128, 00:06:55.331 "max_io_qpairs_per_ctrlr": 127, 00:06:55.331 
"in_capsule_data_size": 4096, 00:06:55.331 "max_io_size": 131072, 00:06:55.331 "io_unit_size": 131072, 00:06:55.331 "max_aq_depth": 128, 00:06:55.331 "num_shared_buffers": 511, 00:06:55.331 "buf_cache_size": 4294967295, 00:06:55.331 "dif_insert_or_strip": false, 00:06:55.331 "zcopy": false, 00:06:55.331 "c2h_success": true, 00:06:55.331 "sock_priority": 0, 00:06:55.331 "abort_timeout_sec": 1, 00:06:55.331 "ack_timeout": 0, 00:06:55.331 "data_wr_pool_size": 0 00:06:55.331 } 00:06:55.331 } 00:06:55.331 ] 00:06:55.331 }, 00:06:55.331 { 00:06:55.331 "subsystem": "iscsi", 00:06:55.331 "config": [ 00:06:55.331 { 00:06:55.331 "method": "iscsi_set_options", 00:06:55.331 "params": { 00:06:55.331 "node_base": "iqn.2016-06.io.spdk", 00:06:55.331 "max_sessions": 128, 00:06:55.331 "max_connections_per_session": 2, 00:06:55.331 "max_queue_depth": 64, 00:06:55.331 "default_time2wait": 2, 00:06:55.331 "default_time2retain": 20, 00:06:55.331 "first_burst_length": 8192, 00:06:55.331 "immediate_data": true, 00:06:55.331 "allow_duplicated_isid": false, 00:06:55.331 "error_recovery_level": 0, 00:06:55.331 "nop_timeout": 60, 00:06:55.331 "nop_in_interval": 30, 00:06:55.331 "disable_chap": false, 00:06:55.331 "require_chap": false, 00:06:55.331 "mutual_chap": false, 00:06:55.331 "chap_group": 0, 00:06:55.331 "max_large_datain_per_connection": 64, 00:06:55.331 "max_r2t_per_connection": 4, 00:06:55.331 "pdu_pool_size": 36864, 00:06:55.331 "immediate_data_pool_size": 16384, 00:06:55.331 "data_out_pool_size": 2048 00:06:55.331 } 00:06:55.331 } 00:06:55.331 ] 00:06:55.331 } 00:06:55.331 ] 00:06:55.331 } 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 774439 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 774439 ']' 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # 
kill -0 774439 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 774439 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 774439' 00:06:55.331 killing process with pid 774439 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 774439 00:06:55.331 13:07:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 774439 00:06:55.591 13:07:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=774718 00:06:55.591 13:07:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:55.591 13:07:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:07:00.867 13:07:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 774718 00:07:00.867 13:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 774718 ']' 00:07:00.867 13:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 774718 00:07:00.867 13:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:00.867 13:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.867 13:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 774718 
00:07:00.867 13:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.867 13:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.867 13:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 774718' 00:07:00.867 killing process with pid 774718 00:07:00.867 13:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 774718 00:07:00.867 13:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 774718 00:07:01.124 13:07:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:07:01.124 13:07:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:07:01.124 00:07:01.125 real 0m6.940s 00:07:01.125 user 0m6.658s 00:07:01.125 sys 0m0.831s 00:07:01.125 13:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.125 13:07:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:01.125 ************************************ 00:07:01.125 END TEST skip_rpc_with_json 00:07:01.125 ************************************ 00:07:01.125 13:07:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:01.125 13:07:11 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.125 13:07:11 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.125 13:07:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.125 ************************************ 00:07:01.125 START TEST skip_rpc_with_delay 00:07:01.125 ************************************ 00:07:01.125 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:01.125 13:07:11 
skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:01.125 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:01.125 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:01.125 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.125 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.125 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.125 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.125 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.125 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.125 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.125 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:01.125 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:01.384 [2024-07-25 13:07:11.619098] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:01.384 [2024-07-25 13:07:11.619200] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:01.384 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:01.384 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.384 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.384 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.384 00:07:01.384 real 0m0.092s 00:07:01.384 user 0m0.054s 00:07:01.384 sys 0m0.037s 00:07:01.384 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.384 13:07:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:01.384 ************************************ 00:07:01.384 END TEST skip_rpc_with_delay 00:07:01.384 ************************************ 00:07:01.384 13:07:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:01.384 13:07:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:01.384 13:07:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:01.384 13:07:11 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.384 13:07:11 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.384 13:07:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.384 ************************************ 00:07:01.384 START TEST exit_on_failed_rpc_init 00:07:01.384 ************************************ 00:07:01.384 13:07:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:01.384 13:07:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=775652 00:07:01.384 13:07:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 775652 00:07:01.384 13:07:11 
skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:01.384 13:07:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 775652 ']' 00:07:01.384 13:07:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.384 13:07:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.384 13:07:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.384 13:07:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.384 13:07:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:01.384 [2024-07-25 13:07:11.794242] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:07:01.384 [2024-07-25 13:07:11.794298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775652 ] 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:01.384 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:01.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.384 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:01.642 [2024-07-25 13:07:11.926827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.642 [2024-07-25 13:07:12.011271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.210 13:07:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.210 13:07:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:02.210 13:07:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:02.210 13:07:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:02.210 13:07:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:02.210 13:07:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:02.210 13:07:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:02.210 13:07:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.210 13:07:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:02.210 13:07:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.210 13:07:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:02.210 13:07:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.210 13:07:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:02.210 13:07:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:02.210 13:07:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:02.470 [2024-07-25 13:07:12.757248] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:07:02.470 [2024-07-25 13:07:12.757308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775906 ] 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:02.470 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:02.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.470 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:02.470 [2024-07-25 13:07:12.877086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.752 [2024-07-25 13:07:12.959524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.752 [2024-07-25 13:07:12.959602] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:02.752 [2024-07-25 13:07:12.959617] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:02.752 [2024-07-25 13:07:12.959628] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.752 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:02.752 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.752 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:02.752 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:02.752 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:02.752 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.752 13:07:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:02.752 13:07:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 775652 00:07:02.752 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 775652 ']' 00:07:02.752 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 775652 00:07:02.752 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:02.752 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.752 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 775652 00:07:02.752 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.752 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.752 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 775652' 
00:07:02.753 killing process with pid 775652 00:07:02.753 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 775652 00:07:02.753 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 775652 00:07:03.051 00:07:03.051 real 0m1.707s 00:07:03.051 user 0m1.931s 00:07:03.051 sys 0m0.591s 00:07:03.051 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.051 13:07:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:03.051 ************************************ 00:07:03.051 END TEST exit_on_failed_rpc_init 00:07:03.051 ************************************ 00:07:03.051 13:07:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:07:03.051 00:07:03.051 real 0m14.597s 00:07:03.051 user 0m13.888s 00:07:03.051 sys 0m2.132s 00:07:03.051 13:07:13 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.051 13:07:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.051 ************************************ 00:07:03.051 END TEST skip_rpc 00:07:03.051 ************************************ 00:07:03.051 13:07:13 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:03.051 13:07:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.051 13:07:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.051 13:07:13 -- common/autotest_common.sh@10 -- # set +x 00:07:03.310 ************************************ 00:07:03.310 START TEST rpc_client 00:07:03.310 ************************************ 00:07:03.310 13:07:13 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:03.310 * Looking for test storage... 
00:07:03.310 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client 00:07:03.310 13:07:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:03.310 OK 00:07:03.310 13:07:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:03.310 00:07:03.310 real 0m0.144s 00:07:03.310 user 0m0.057s 00:07:03.310 sys 0m0.099s 00:07:03.310 13:07:13 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.310 13:07:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:03.310 ************************************ 00:07:03.310 END TEST rpc_client 00:07:03.310 ************************************ 00:07:03.310 13:07:13 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:07:03.310 13:07:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.310 13:07:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.310 13:07:13 -- common/autotest_common.sh@10 -- # set +x 00:07:03.310 ************************************ 00:07:03.310 START TEST json_config 00:07:03.310 ************************************ 00:07:03.310 13:07:13 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:07:03.569 13:07:13 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.569 13:07:13 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.569 13:07:13 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.569 13:07:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.569 13:07:13 
json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.569 13:07:13 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.569 13:07:13 json_config -- paths/export.sh@5 -- # export PATH 00:07:03.569 13:07:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@47 -- # : 0 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:03.569 13:07:13 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json') 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@359 -- # 
trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:07:03.569 INFO: JSON configuration test init 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:07:03.569 13:07:13 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:03.569 13:07:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:07:03.569 13:07:13 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:03.569 13:07:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.569 13:07:13 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:07:03.570 13:07:13 json_config -- json_config/common.sh@9 -- # local app=target 00:07:03.570 13:07:13 json_config -- json_config/common.sh@10 -- # shift 00:07:03.570 13:07:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:03.570 13:07:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:03.570 13:07:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:03.570 13:07:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:03.570 13:07:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:03.570 13:07:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=776282 00:07:03.570 13:07:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:03.570 Waiting for target to run... 
00:07:03.570 13:07:13 json_config -- json_config/common.sh@25 -- # waitforlisten 776282 /var/tmp/spdk_tgt.sock 00:07:03.570 13:07:13 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:03.570 13:07:13 json_config -- common/autotest_common.sh@831 -- # '[' -z 776282 ']' 00:07:03.570 13:07:13 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:03.570 13:07:13 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.570 13:07:13 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:03.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:03.570 13:07:13 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.570 13:07:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.570 [2024-07-25 13:07:13.985417] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
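The `waitforlisten 776282 /var/tmp/spdk_tgt.sock` step above blocks until the freshly launched `spdk_tgt` is accepting RPCs on its UNIX socket. A minimal sketch of that polling pattern (the function name and retry cap here are illustrative, not SPDK's actual helper):

```shell
# Illustrative poll-until-ready loop, similar in spirit to the
# waitforlisten helper used above. wait_for_path is a hypothetical
# name introduced for this sketch.
wait_for_path() {
    path=$1        # file or socket to wait for
    max=${2:-100}  # maximum number of 0.1s polls (default ~10s)
    i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max" ] && return 1
        sleep 0.1
    done
    return 0
}
```

The real helper goes further: the socket can exist before the app is ready, so it typically also retries a harmless RPC (e.g. via `rpc.py -s <socket>`) until the target answers.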
00:07:03.570 [2024-07-25 13:07:13.985481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776282 ] 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3d:02.3 cannot be used 
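Further down this run, `tgt_check_notification_types` verifies that the target reports exactly the expected notification types (`bdev_register`, `bdev_unregister`) by concatenating both lists and piping them through `tr`/`sort`/`uniq -u`: any type present in only one list survives the filter, so an empty `type_diff` means the lists match. A standalone sketch of that set-difference idiom:

```shell
# Symmetric-difference check as in tgt_check_notification_types:
# lines occurring in both lists appear twice after sort and are
# dropped by `uniq -u`; only unmatched entries survive.
enabled="bdev_register bdev_unregister"
reported="bdev_register bdev_unregister"
type_diff=$(echo $enabled $reported | tr ' ' '\n' | sort | uniq -u)
if [ -z "$type_diff" ]; then
    echo "notification types match"
else
    echo "mismatch: $type_diff"
fi
```

Note this computes a symmetric difference, so it catches both missing and unexpected types in a single pass.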
00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:04.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.137 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:04.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.138 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:04.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.138 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:04.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.138 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:04.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.138 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:04.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.138 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:04.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.138 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:04.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.138 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:04.138 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.138 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:04.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.138 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:04.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.138 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:04.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.138 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:04.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.138 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:04.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.138 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:04.138 [2024-07-25 13:07:14.506533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.138 [2024-07-25 13:07:14.608754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.396 13:07:14 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.396 13:07:14 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:04.396 13:07:14 json_config -- json_config/common.sh@26 -- # echo '' 00:07:04.396 00:07:04.396 13:07:14 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:07:04.396 13:07:14 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:07:04.396 13:07:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:04.396 13:07:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.396 13:07:14 json_config -- json_config/json_config.sh@99 -- # [[ 1 -eq 1 ]] 00:07:04.396 13:07:14 json_config -- json_config/json_config.sh@100 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:07:04.396 13:07:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:07:04.655 13:07:15 json_config -- json_config/json_config.sh@101 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:07:04.655 13:07:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:07:04.912 [2024-07-25 13:07:15.258779] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:07:04.912 13:07:15 json_config -- json_config/json_config.sh@102 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:07:04.912 13:07:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:07:05.170 [2024-07-25 13:07:15.487363] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:07:05.170 13:07:15 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:07:05.170 13:07:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:05.170 13:07:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:05.170 13:07:15 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:05.170 13:07:15 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:07:05.170 13:07:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:05.429 [2024-07-25 13:07:15.776435] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:07:10.701 13:07:20 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:07:10.701 13:07:20 json_config -- 
json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:10.701 13:07:20 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.701 13:07:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.701 13:07:20 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:10.701 13:07:20 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:10.701 13:07:20 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:10.701 13:07:20 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:10.701 13:07:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:10.701 13:07:20 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@51 -- # sort 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:07:10.701 13:07:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.701 13:07:21 json_config -- common/autotest_common.sh@10 
-- # set +x 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@59 -- # return 0 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@282 -- # [[ 1 -eq 1 ]] 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@283 -- # create_bdev_subsystem_config 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@109 -- # timing_enter create_bdev_subsystem_config 00:07:10.701 13:07:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.701 13:07:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@111 -- # expected_notifications=() 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@111 -- # local expected_notifications 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@115 -- # expected_notifications+=($(get_notifications)) 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@115 -- # get_notifications 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:07:10.701 13:07:21 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:10.701 13:07:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:10.960 13:07:21 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:07:10.960 13:07:21 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:10.960 13:07:21 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:10.960 13:07:21 json_config -- 
json_config/json_config.sh@117 -- # [[ 1 -eq 1 ]] 00:07:10.960 13:07:21 json_config -- json_config/json_config.sh@118 -- # local lvol_store_base_bdev=Nvme0n1 00:07:10.960 13:07:21 json_config -- json_config/json_config.sh@120 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:07:10.960 13:07:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:07:11.220 Nvme0n1p0 Nvme0n1p1 00:07:11.220 13:07:21 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_split_create Malloc0 3 00:07:11.220 13:07:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:07:11.802 [2024-07-25 13:07:22.003694] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:11.802 [2024-07-25 13:07:22.003746] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:11.802 00:07:11.802 13:07:22 json_config -- json_config/json_config.sh@122 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:07:11.802 13:07:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:07:11.802 Malloc3 00:07:11.802 13:07:22 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:11.802 13:07:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:12.370 [2024-07-25 13:07:22.737763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:12.371 [2024-07-25 13:07:22.737814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.371 [2024-07-25 
13:07:22.737833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10bcbd0 00:07:12.371 [2024-07-25 13:07:22.737844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.371 [2024-07-25 13:07:22.739274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.371 [2024-07-25 13:07:22.739305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:12.371 PTBdevFromMalloc3 00:07:12.371 13:07:22 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_null_create Null0 32 512 00:07:12.371 13:07:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:07:12.630 Null0 00:07:12.630 13:07:22 json_config -- json_config/json_config.sh@127 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:07:12.630 13:07:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:07:12.889 Malloc0 00:07:12.889 13:07:23 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:07:12.889 13:07:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:07:13.149 Malloc1 00:07:13.149 13:07:23 json_config -- json_config/json_config.sh@141 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:07:13.149 13:07:23 json_config -- json_config/json_config.sh@144 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 
00:07:13.408 102400+0 records in 00:07:13.408 102400+0 records out 00:07:13.408 104857600 bytes (105 MB, 100 MiB) copied, 0.283989 s, 369 MB/s 00:07:13.408 13:07:23 json_config -- json_config/json_config.sh@145 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:07:13.408 13:07:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:07:13.668 aio_disk 00:07:13.668 13:07:24 json_config -- json_config/json_config.sh@146 -- # expected_notifications+=(bdev_register:aio_disk) 00:07:13.668 13:07:24 json_config -- json_config/json_config.sh@151 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:13.668 13:07:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:17.861 a030cab5-8578-4853-90e2-40f8e0b7aed8 00:07:17.861 13:07:28 json_config -- json_config/json_config.sh@158 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:07:17.861 13:07:28 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:07:17.861 13:07:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:07:18.121 13:07:28 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:07:18.121 13:07:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t 
lvol1 32 00:07:18.121 13:07:28 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:18.121 13:07:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:18.380 13:07:28 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:18.380 13:07:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:18.639 13:07:29 json_config -- json_config/json_config.sh@161 -- # [[ 1 -eq 1 ]] 00:07:18.639 13:07:29 json_config -- json_config/json_config.sh@162 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:07:18.639 13:07:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:07:18.898 MallocForCryptoBdev 00:07:18.898 13:07:29 json_config -- json_config/json_config.sh@163 -- # lspci -d:37c8 00:07:18.898 13:07:29 json_config -- json_config/json_config.sh@163 -- # wc -l 00:07:18.898 13:07:29 json_config -- json_config/json_config.sh@163 -- # [[ 5 -eq 0 ]] 00:07:18.898 13:07:29 json_config -- json_config/json_config.sh@166 -- # local crypto_driver=crypto_qat 00:07:18.898 13:07:29 json_config -- json_config/json_config.sh@169 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:07:18.898 13:07:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:07:19.158 [2024-07-25 13:07:29.511821] vbdev_crypto_rpc.c: 
136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:07:19.158 CryptoMallocBdev 00:07:19.158 13:07:29 json_config -- json_config/json_config.sh@173 -- # expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:07:19.158 13:07:29 json_config -- json_config/json_config.sh@176 -- # [[ 0 -eq 1 ]] 00:07:19.158 13:07:29 json_config -- json_config/json_config.sh@182 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:f141f473-f6ee-4c8b-b639-4169048270c0 bdev_register:b1ef4476-ce39-4411-90a1-556c26c9a7f7 bdev_register:82813f42-1790-4647-bf9e-b44ebe23403e bdev_register:4907ba24-f790-4935-8407-772de1c114b2 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:07:19.158 13:07:29 json_config -- json_config/json_config.sh@71 -- # local events_to_check 00:07:19.158 13:07:29 json_config -- json_config/json_config.sh@72 -- # local recorded_events 00:07:19.158 13:07:29 json_config -- json_config/json_config.sh@75 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:07:19.158 13:07:29 json_config -- json_config/json_config.sh@75 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:f141f473-f6ee-4c8b-b639-4169048270c0 bdev_register:b1ef4476-ce39-4411-90a1-556c26c9a7f7 bdev_register:82813f42-1790-4647-bf9e-b44ebe23403e bdev_register:4907ba24-f790-4935-8407-772de1c114b2 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:07:19.158 13:07:29 json_config -- 
json_config/json_config.sh@75 -- # sort 00:07:19.158 13:07:29 json_config -- json_config/json_config.sh@76 -- # recorded_events=($(get_notifications | sort)) 00:07:19.158 13:07:29 json_config -- json_config/json_config.sh@76 -- # get_notifications 00:07:19.158 13:07:29 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:07:19.158 13:07:29 json_config -- json_config/json_config.sh@76 -- # sort 00:07:19.158 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.158 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.158 13:07:29 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:19.158 13:07:29 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:07:19.158 13:07:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p1 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p0 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc3 00:07:19.417 13:07:29 json_config -- 
json_config/json_config.sh@65 -- # IFS=: 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:PTBdevFromMalloc3 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Null0 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p2 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p1 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p0 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc1 00:07:19.417 13:07:29 json_config -- 
json_config/json_config.sh@65 -- # IFS=: 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:aio_disk 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:f141f473-f6ee-4c8b-b639-4169048270c0 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.417 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:b1ef4476-ce39-4411-90a1-556c26c9a7f7 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:82813f42-1790-4647-bf9e-b44ebe23403e 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:4907ba24-f790-4935-8407-772de1c114b2 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:MallocForCryptoBdev 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.418 13:07:29 json_config -- 
json_config/json_config.sh@66 -- # echo bdev_register:CryptoMallocBdev 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@78 -- # [[ bdev_register:4907ba24-f790-4935-8407-772de1c114b2 bdev_register:82813f42-1790-4647-bf9e-b44ebe23403e bdev_register:aio_disk bdev_register:b1ef4476-ce39-4411-90a1-556c26c9a7f7 bdev_register:CryptoMallocBdev bdev_register:f141f473-f6ee-4c8b-b639-4169048270c0 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\4\9\0\7\b\a\2\4\-\f\7\9\0\-\4\9\3\5\-\8\4\0\7\-\7\7\2\d\e\1\c\1\1\4\b\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\2\8\1\3\f\4\2\-\1\7\9\0\-\4\6\4\7\-\b\f\9\e\-\b\4\4\e\b\e\2\3\4\0\3\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\1\e\f\4\4\7\6\-\c\e\3\9\-\4\4\1\1\-\9\0\a\1\-\5\5\6\c\2\6\c\9\a\7\f\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\1\4\1\f\4\7\3\-\f\6\e\e\-\4\c\8\b\-\b\6\3\9\-\4\1\6\9\0\4\8\2\7\0\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]] 
00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@90 -- # cat 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@90 -- # printf ' %s\n' bdev_register:4907ba24-f790-4935-8407-772de1c114b2 bdev_register:82813f42-1790-4647-bf9e-b44ebe23403e bdev_register:aio_disk bdev_register:b1ef4476-ce39-4411-90a1-556c26c9a7f7 bdev_register:CryptoMallocBdev bdev_register:f141f473-f6ee-4c8b-b639-4169048270c0 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 00:07:19.418 Expected events matched: 00:07:19.418 bdev_register:4907ba24-f790-4935-8407-772de1c114b2 00:07:19.418 bdev_register:82813f42-1790-4647-bf9e-b44ebe23403e 00:07:19.418 bdev_register:aio_disk 00:07:19.418 bdev_register:b1ef4476-ce39-4411-90a1-556c26c9a7f7 00:07:19.418 bdev_register:CryptoMallocBdev 00:07:19.418 bdev_register:f141f473-f6ee-4c8b-b639-4169048270c0 00:07:19.418 bdev_register:Malloc0 00:07:19.418 bdev_register:Malloc0p0 00:07:19.418 bdev_register:Malloc0p1 00:07:19.418 bdev_register:Malloc0p2 00:07:19.418 bdev_register:Malloc1 00:07:19.418 bdev_register:Malloc3 00:07:19.418 bdev_register:MallocForCryptoBdev 00:07:19.418 bdev_register:Null0 00:07:19.418 bdev_register:Nvme0n1 00:07:19.418 bdev_register:Nvme0n1p0 00:07:19.418 bdev_register:Nvme0n1p1 00:07:19.418 bdev_register:PTBdevFromMalloc3 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@184 -- # timing_exit create_bdev_subsystem_config 00:07:19.418 13:07:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:19.418 13:07:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 
00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:07:19.418 13:07:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:19.418 13:07:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:07:19.418 13:07:29 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:19.418 13:07:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:19.677 MallocBdevForConfigChangeCheck 00:07:19.677 13:07:30 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:07:19.677 13:07:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:19.677 13:07:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.677 13:07:30 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:07:19.677 13:07:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:20.245 13:07:30 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:07:20.245 INFO: shutting down applications... 
00:07:20.245 13:07:30 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:07:20.245 13:07:30 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:07:20.245 13:07:30 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:07:20.245 13:07:30 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:20.245 [2024-07-25 13:07:30.691395] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:07:22.848 Calling clear_iscsi_subsystem 00:07:22.848 Calling clear_nvmf_subsystem 00:07:22.848 Calling clear_nbd_subsystem 00:07:22.848 Calling clear_ublk_subsystem 00:07:22.848 Calling clear_vhost_blk_subsystem 00:07:22.848 Calling clear_vhost_scsi_subsystem 00:07:22.848 Calling clear_bdev_subsystem 00:07:22.848 13:07:33 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py 00:07:22.848 13:07:33 json_config -- json_config/json_config.sh@347 -- # count=100 00:07:22.848 13:07:33 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:07:22.848 13:07:33 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:22.848 13:07:33 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:22.848 13:07:33 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:23.415 13:07:33 json_config -- json_config/json_config.sh@349 -- # break 00:07:23.415 13:07:33 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:07:23.415 13:07:33 
json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:07:23.415 13:07:33 json_config -- json_config/common.sh@31 -- # local app=target 00:07:23.415 13:07:33 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:23.415 13:07:33 json_config -- json_config/common.sh@35 -- # [[ -n 776282 ]] 00:07:23.415 13:07:33 json_config -- json_config/common.sh@38 -- # kill -SIGINT 776282 00:07:23.415 13:07:33 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:23.415 13:07:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:23.415 13:07:33 json_config -- json_config/common.sh@41 -- # kill -0 776282 00:07:23.415 13:07:33 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:23.983 13:07:34 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:23.983 13:07:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:23.983 13:07:34 json_config -- json_config/common.sh@41 -- # kill -0 776282 00:07:23.983 13:07:34 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:23.983 13:07:34 json_config -- json_config/common.sh@43 -- # break 00:07:23.983 13:07:34 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:23.983 13:07:34 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:23.983 SPDK target shutdown done 00:07:23.983 13:07:34 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:07:23.983 INFO: relaunching applications... 
00:07:23.983 13:07:34 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:23.984 13:07:34 json_config -- json_config/common.sh@9 -- # local app=target 00:07:23.984 13:07:34 json_config -- json_config/common.sh@10 -- # shift 00:07:23.984 13:07:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:23.984 13:07:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:23.984 13:07:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:23.984 13:07:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:23.984 13:07:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:23.984 13:07:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=779871 00:07:23.984 13:07:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:23.984 Waiting for target to run... 00:07:23.984 13:07:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:23.984 13:07:34 json_config -- json_config/common.sh@25 -- # waitforlisten 779871 /var/tmp/spdk_tgt.sock 00:07:23.984 13:07:34 json_config -- common/autotest_common.sh@831 -- # '[' -z 779871 ']' 00:07:23.984 13:07:34 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:23.984 13:07:34 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.984 13:07:34 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:23.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:23.984 13:07:34 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.984 13:07:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:23.984 [2024-07-25 13:07:34.236796] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:07:23.984 [2024-07-25 13:07:34.236863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779871 ] 00:07:24.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.243 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:24.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.243 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:24.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.243 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:24.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.243 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:24.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.243 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:24.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.243 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:24.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.243 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:24.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.243 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:24.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.243 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:24.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.243 EAL: Requested device 0000:3d:02.1 cannot be used 
00:07:24.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.243 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:24.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.243 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:24.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.243 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:24.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.243 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:24.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.243 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:24.244 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:24.244 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:24.244 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:24.244 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:24.244 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:24.244 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:24.244 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:24.244 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:24.244 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:24.244 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:24.244 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:24.244 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:24.244 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:24.244 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:24.244 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:24.244 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:24.244 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.244 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:24.244 [2024-07-25 13:07:34.604891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.244 [2024-07-25 13:07:34.681551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.503 [2024-07-25 13:07:34.735598] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:07:24.503 [2024-07-25 13:07:34.743632] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:07:24.503 [2024-07-25 13:07:34.751651] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:07:24.503 [2024-07-25 13:07:34.832442] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:07:27.040 [2024-07-25 13:07:36.972053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on Malloc3 00:07:27.040 [2024-07-25 13:07:36.972110] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:27.040 [2024-07-25 13:07:36.972123] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:27.040 [2024-07-25 13:07:36.980072] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:27.040 [2024-07-25 13:07:36.980096] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:27.040 [2024-07-25 13:07:36.988086] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:27.040 [2024-07-25 13:07:36.988108] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:27.040 [2024-07-25 13:07:36.996118] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC" 00:07:27.040 [2024-07-25 13:07:36.996146] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev 00:07:27.040 [2024-07-25 13:07:36.996158] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:29.576 [2024-07-25 13:07:39.892833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:29.576 [2024-07-25 13:07:39.892878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.576 [2024-07-25 13:07:39.892893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17ec9f0 00:07:29.576 [2024-07-25 13:07:39.892904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.576 [2024-07-25 13:07:39.893173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.576 [2024-07-25 13:07:39.893189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:30.144 13:07:40 json_config -- 
common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.144 13:07:40 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:30.144 13:07:40 json_config -- json_config/common.sh@26 -- # echo '' 00:07:30.144 00:07:30.144 13:07:40 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:07:30.144 13:07:40 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:30.144 INFO: Checking if target configuration is the same... 00:07:30.144 13:07:40 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:30.144 13:07:40 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:07:30.144 13:07:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:30.144 + '[' 2 -ne 2 ']' 00:07:30.144 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:30.144 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:07:30.144 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:07:30.144 +++ basename /dev/fd/62 00:07:30.144 ++ mktemp /tmp/62.XXX 00:07:30.144 + tmp_file_1=/tmp/62.G9z 00:07:30.403 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:30.403 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:30.403 + tmp_file_2=/tmp/spdk_tgt_config.json.K8s 00:07:30.403 + ret=0 00:07:30.403 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:30.662 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:30.662 + diff -u /tmp/62.G9z /tmp/spdk_tgt_config.json.K8s 00:07:30.662 + echo 'INFO: JSON config files are the same' 00:07:30.662 INFO: JSON config files are the same 00:07:30.662 + rm /tmp/62.G9z /tmp/spdk_tgt_config.json.K8s 00:07:30.662 + exit 0 00:07:30.662 13:07:41 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:07:30.662 13:07:41 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:30.662 INFO: changing configuration and checking if this can be detected... 
00:07:30.662 13:07:41 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:30.662 13:07:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:30.921 13:07:41 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:30.921 13:07:41 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:07:30.921 13:07:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:30.921 + '[' 2 -ne 2 ']' 00:07:30.921 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:30.921 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:07:30.921 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:07:30.921 +++ basename /dev/fd/62 00:07:30.921 ++ mktemp /tmp/62.XXX 00:07:30.921 + tmp_file_1=/tmp/62.2Y0 00:07:30.921 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:30.921 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:30.921 + tmp_file_2=/tmp/spdk_tgt_config.json.mHe 00:07:30.921 + ret=0 00:07:30.921 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:31.180 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:31.439 + diff -u /tmp/62.2Y0 /tmp/spdk_tgt_config.json.mHe 00:07:31.439 + ret=1 00:07:31.439 + echo '=== Start of file: /tmp/62.2Y0 ===' 00:07:31.439 + cat /tmp/62.2Y0 00:07:31.439 + echo '=== End of file: /tmp/62.2Y0 ===' 00:07:31.439 + echo '' 00:07:31.439 + echo '=== Start of file: /tmp/spdk_tgt_config.json.mHe ===' 00:07:31.439 + cat /tmp/spdk_tgt_config.json.mHe 00:07:31.439 + echo '=== End of file: /tmp/spdk_tgt_config.json.mHe ===' 00:07:31.439 + echo '' 00:07:31.439 + rm /tmp/62.2Y0 /tmp/spdk_tgt_config.json.mHe 00:07:31.439 + exit 1 00:07:31.439 13:07:41 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:07:31.439 INFO: configuration change detected. 
00:07:31.439 13:07:41 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:07:31.439 13:07:41 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:07:31.439 13:07:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:31.439 13:07:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:31.439 13:07:41 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:07:31.439 13:07:41 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:07:31.439 13:07:41 json_config -- json_config/json_config.sh@321 -- # [[ -n 779871 ]] 00:07:31.439 13:07:41 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:07:31.440 13:07:41 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:07:31.440 13:07:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:31.440 13:07:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:31.440 13:07:41 json_config -- json_config/json_config.sh@190 -- # [[ 1 -eq 1 ]] 00:07:31.440 13:07:41 json_config -- json_config/json_config.sh@191 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:07:31.440 13:07:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:07:31.698 13:07:41 json_config -- json_config/json_config.sh@192 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:07:31.698 13:07:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:07:31.698 13:07:42 json_config -- json_config/json_config.sh@193 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:07:31.698 13:07:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete 
lvs_test/snapshot0 00:07:31.957 13:07:42 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:07:31.957 13:07:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:07:32.267 13:07:42 json_config -- json_config/json_config.sh@197 -- # uname -s 00:07:32.267 13:07:42 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:07:32.267 13:07:42 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:07:32.267 13:07:42 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:07:32.267 13:07:42 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:07:32.267 13:07:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.267 13:07:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:32.267 13:07:42 json_config -- json_config/json_config.sh@327 -- # killprocess 779871 00:07:32.267 13:07:42 json_config -- common/autotest_common.sh@950 -- # '[' -z 779871 ']' 00:07:32.267 13:07:42 json_config -- common/autotest_common.sh@954 -- # kill -0 779871 00:07:32.267 13:07:42 json_config -- common/autotest_common.sh@955 -- # uname 00:07:32.267 13:07:42 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.267 13:07:42 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 779871 00:07:32.526 13:07:42 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:32.526 13:07:42 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:32.526 13:07:42 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 779871' 00:07:32.526 killing process with pid 779871 00:07:32.526 13:07:42 json_config -- common/autotest_common.sh@969 -- # kill 779871 00:07:32.526 13:07:42 json_config -- 
common/autotest_common.sh@974 -- # wait 779871 00:07:34.741 13:07:45 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:34.741 13:07:45 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:07:34.742 13:07:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:34.742 13:07:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:35.001 13:07:45 json_config -- json_config/json_config.sh@332 -- # return 0 00:07:35.001 13:07:45 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:07:35.001 INFO: Success 00:07:35.001 00:07:35.001 real 0m31.475s 00:07:35.001 user 0m36.790s 00:07:35.001 sys 0m3.872s 00:07:35.001 13:07:45 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.001 13:07:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:35.001 ************************************ 00:07:35.001 END TEST json_config 00:07:35.001 ************************************ 00:07:35.001 13:07:45 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:35.001 13:07:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.001 13:07:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.001 13:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:35.001 ************************************ 00:07:35.001 START TEST json_config_extra_key 00:07:35.001 ************************************ 00:07:35.001 13:07:45 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:35.001 13:07:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.001 13:07:45 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:07:35.001 13:07:45 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:35.001 13:07:45 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.001 13:07:45 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.001 13:07:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.002 13:07:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.002 13:07:45 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.002 13:07:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:35.002 13:07:45 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.002 13:07:45 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:35.002 13:07:45 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.002 13:07:45 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.002 13:07:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.002 13:07:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.002 13:07:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.002 13:07:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.002 13:07:45 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.002 13:07:45 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.002 13:07:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:07:35.002 13:07:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:35.002 13:07:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:35.002 13:07:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:35.002 13:07:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:35.002 13:07:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 
1024') 00:07:35.002 13:07:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:35.002 13:07:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:35.002 13:07:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:35.002 13:07:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:35.002 13:07:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:35.002 INFO: launching applications... 00:07:35.002 13:07:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:07:35.002 13:07:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:35.002 13:07:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:35.002 13:07:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:35.002 13:07:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:35.002 13:07:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:35.002 13:07:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:35.002 13:07:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:35.002 13:07:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=781852 00:07:35.002 13:07:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:35.002 Waiting for target to run... 
00:07:35.002 13:07:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 781852 /var/tmp/spdk_tgt.sock 00:07:35.002 13:07:45 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 781852 ']' 00:07:35.002 13:07:45 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:35.002 13:07:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:07:35.002 13:07:45 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.002 13:07:45 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:35.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:35.002 13:07:45 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.002 13:07:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:35.261 [2024-07-25 13:07:45.520743] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:07:35.261 [2024-07-25 13:07:45.520803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781852 ] 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:35.520 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:35.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.520 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:35.520 [2024-07-25 13:07:45.891418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.520 [2024-07-25 13:07:45.968396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.089 13:07:46 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.089 13:07:46 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:36.089 13:07:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:36.089 00:07:36.089 13:07:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:36.089 INFO: shutting down applications... 
00:07:36.089 13:07:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:36.089 13:07:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:36.089 13:07:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:36.089 13:07:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 781852 ]] 00:07:36.089 13:07:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 781852 00:07:36.089 13:07:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:36.089 13:07:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:36.089 13:07:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 781852 00:07:36.089 13:07:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:36.657 13:07:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:36.657 13:07:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:36.657 13:07:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 781852 00:07:36.657 13:07:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:36.657 13:07:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:36.657 13:07:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:36.657 13:07:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:36.657 SPDK target shutdown done 00:07:36.657 13:07:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:36.657 Success 00:07:36.657 00:07:36.657 real 0m1.570s 00:07:36.657 user 0m1.189s 00:07:36.657 sys 0m0.494s 00:07:36.657 13:07:46 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.657 13:07:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:36.657 ************************************ 
00:07:36.657 END TEST json_config_extra_key 00:07:36.657 ************************************ 00:07:36.657 13:07:46 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:36.657 13:07:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.657 13:07:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.657 13:07:46 -- common/autotest_common.sh@10 -- # set +x 00:07:36.657 ************************************ 00:07:36.657 START TEST alias_rpc 00:07:36.657 ************************************ 00:07:36.657 13:07:46 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:36.657 * Looking for test storage... 00:07:36.657 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc 00:07:36.657 13:07:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:36.657 13:07:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=782165 00:07:36.657 13:07:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 782165 00:07:36.657 13:07:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:36.657 13:07:47 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 782165 ']' 00:07:36.657 13:07:47 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.657 13:07:47 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.657 13:07:47 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:36.657 13:07:47 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.657 13:07:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.917 [2024-07-25 13:07:47.150284] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:07:36.917 [2024-07-25 13:07:47.150347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782165 ] 00:07:36.917 [2024-07-25 13:07:47.284497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.917 [2024-07-25 13:07:47.371067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.892 13:07:47 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.892 13:07:47 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:37.892 13:07:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:37.892 13:07:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 782165 00:07:37.892 13:07:48 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 782165 ']' 00:07:37.892 13:07:48 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 782165 00:07:37.892 13:07:48 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:37.892 13:07:48 alias_rpc --
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.892 13:07:48 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 782165 00:07:37.892 13:07:48 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.892 13:07:48 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.892 13:07:48 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 782165' 00:07:37.892 killing process with pid 782165 00:07:37.892 13:07:48 alias_rpc -- common/autotest_common.sh@969 -- # kill 782165 00:07:37.892 13:07:48 alias_rpc -- common/autotest_common.sh@974 -- # wait 782165 00:07:38.151 00:07:38.151 real 0m1.628s 00:07:38.151 user 0m1.719s 00:07:38.151 sys 0m0.534s 00:07:38.151 13:07:48 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.151 13:07:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.151 ************************************ 00:07:38.151 END TEST alias_rpc 00:07:38.151 ************************************ 00:07:38.410 13:07:48 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:38.410 13:07:48 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:38.410 13:07:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.410 13:07:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.410 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:38.410 ************************************ 00:07:38.410 START TEST spdkcli_tcp 00:07:38.410 ************************************ 00:07:38.410 13:07:48 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:38.410 * Looking for test storage... 
00:07:38.410 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli 00:07:38.410 13:07:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh 00:07:38.410 13:07:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:38.410 13:07:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py 00:07:38.410 13:07:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:38.410 13:07:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:38.410 13:07:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:38.410 13:07:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:38.410 13:07:48 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:38.410 13:07:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:38.410 13:07:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=782556 00:07:38.410 13:07:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 782556 00:07:38.410 13:07:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:38.410 13:07:48 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 782556 ']' 00:07:38.410 13:07:48 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.410 13:07:48 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.410 13:07:48 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:38.410 13:07:48 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.410 13:07:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:38.410 [2024-07-25 13:07:48.883981] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:07:38.410 [2024-07-25 13:07:48.884047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782556 ] 00:07:38.670 [2024-07-25 13:07:49.017428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:38.670 [2024-07-25 13:07:49.101870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.670 [2024-07-25 13:07:49.101875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.607 13:07:49 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.607 13:07:49 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:39.607 13:07:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:39.607 13:07:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=782751 00:07:39.607 13:07:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:39.607 [ 00:07:39.607
"bdev_malloc_delete", 00:07:39.607 "bdev_malloc_create", 00:07:39.607 "bdev_null_resize", 00:07:39.607 "bdev_null_delete", 00:07:39.607 "bdev_null_create", 00:07:39.607 "bdev_nvme_cuse_unregister", 00:07:39.608 "bdev_nvme_cuse_register", 00:07:39.608 "bdev_opal_new_user", 00:07:39.608 "bdev_opal_set_lock_state", 00:07:39.608 "bdev_opal_delete", 00:07:39.608 "bdev_opal_get_info", 00:07:39.608 "bdev_opal_create", 00:07:39.608 "bdev_nvme_opal_revert", 00:07:39.608 "bdev_nvme_opal_init", 00:07:39.608 "bdev_nvme_send_cmd", 00:07:39.608 "bdev_nvme_get_path_iostat", 00:07:39.608 "bdev_nvme_get_mdns_discovery_info", 00:07:39.608 "bdev_nvme_stop_mdns_discovery", 00:07:39.608 "bdev_nvme_start_mdns_discovery", 00:07:39.608 "bdev_nvme_set_multipath_policy", 00:07:39.608 "bdev_nvme_set_preferred_path", 00:07:39.608 "bdev_nvme_get_io_paths", 00:07:39.608 "bdev_nvme_remove_error_injection", 00:07:39.608 "bdev_nvme_add_error_injection", 00:07:39.608 "bdev_nvme_get_discovery_info", 00:07:39.608 "bdev_nvme_stop_discovery", 00:07:39.608 "bdev_nvme_start_discovery", 00:07:39.608 "bdev_nvme_get_controller_health_info", 00:07:39.608 "bdev_nvme_disable_controller", 00:07:39.608 "bdev_nvme_enable_controller", 00:07:39.608 "bdev_nvme_reset_controller", 00:07:39.608 "bdev_nvme_get_transport_statistics", 00:07:39.608 "bdev_nvme_apply_firmware", 00:07:39.608 "bdev_nvme_detach_controller", 00:07:39.608 "bdev_nvme_get_controllers", 00:07:39.608 "bdev_nvme_attach_controller", 00:07:39.608 "bdev_nvme_set_hotplug", 00:07:39.608 "bdev_nvme_set_options", 00:07:39.608 "bdev_passthru_delete", 00:07:39.608 "bdev_passthru_create", 00:07:39.608 "bdev_lvol_set_parent_bdev", 00:07:39.608 "bdev_lvol_set_parent", 00:07:39.608 "bdev_lvol_check_shallow_copy", 00:07:39.608 "bdev_lvol_start_shallow_copy", 00:07:39.608 "bdev_lvol_grow_lvstore", 00:07:39.608 "bdev_lvol_get_lvols", 00:07:39.608 "bdev_lvol_get_lvstores", 00:07:39.608 "bdev_lvol_delete", 00:07:39.608 "bdev_lvol_set_read_only", 00:07:39.608 
"bdev_lvol_resize", 00:07:39.608 "bdev_lvol_decouple_parent", 00:07:39.608 "bdev_lvol_inflate", 00:07:39.608 "bdev_lvol_rename", 00:07:39.608 "bdev_lvol_clone_bdev", 00:07:39.608 "bdev_lvol_clone", 00:07:39.608 "bdev_lvol_snapshot", 00:07:39.608 "bdev_lvol_create", 00:07:39.608 "bdev_lvol_delete_lvstore", 00:07:39.608 "bdev_lvol_rename_lvstore", 00:07:39.608 "bdev_lvol_create_lvstore", 00:07:39.608 "bdev_raid_set_options", 00:07:39.608 "bdev_raid_remove_base_bdev", 00:07:39.608 "bdev_raid_add_base_bdev", 00:07:39.608 "bdev_raid_delete", 00:07:39.608 "bdev_raid_create", 00:07:39.608 "bdev_raid_get_bdevs", 00:07:39.608 "bdev_error_inject_error", 00:07:39.608 "bdev_error_delete", 00:07:39.608 "bdev_error_create", 00:07:39.608 "bdev_split_delete", 00:07:39.608 "bdev_split_create", 00:07:39.608 "bdev_delay_delete", 00:07:39.608 "bdev_delay_create", 00:07:39.608 "bdev_delay_update_latency", 00:07:39.608 "bdev_zone_block_delete", 00:07:39.608 "bdev_zone_block_create", 00:07:39.608 "blobfs_create", 00:07:39.608 "blobfs_detect", 00:07:39.608 "blobfs_set_cache_size", 00:07:39.608 "bdev_crypto_delete", 00:07:39.608 "bdev_crypto_create", 00:07:39.608 "bdev_compress_delete", 00:07:39.608 "bdev_compress_create", 00:07:39.608 "bdev_compress_get_orphans", 00:07:39.608 "bdev_aio_delete", 00:07:39.608 "bdev_aio_rescan", 00:07:39.608 "bdev_aio_create", 00:07:39.608 "bdev_ftl_set_property", 00:07:39.608 "bdev_ftl_get_properties", 00:07:39.608 "bdev_ftl_get_stats", 00:07:39.608 "bdev_ftl_unmap", 00:07:39.608 "bdev_ftl_unload", 00:07:39.608 "bdev_ftl_delete", 00:07:39.608 "bdev_ftl_load", 00:07:39.608 "bdev_ftl_create", 00:07:39.608 "bdev_virtio_attach_controller", 00:07:39.608 "bdev_virtio_scsi_get_devices", 00:07:39.608 "bdev_virtio_detach_controller", 00:07:39.608 "bdev_virtio_blk_set_hotplug", 00:07:39.608 "bdev_iscsi_delete", 00:07:39.608 "bdev_iscsi_create", 00:07:39.608 "bdev_iscsi_set_options", 00:07:39.608 "accel_error_inject_error", 00:07:39.608 "ioat_scan_accel_module", 
00:07:39.608 "dsa_scan_accel_module", 00:07:39.608 "iaa_scan_accel_module", 00:07:39.608 "dpdk_cryptodev_get_driver", 00:07:39.608 "dpdk_cryptodev_set_driver", 00:07:39.608 "dpdk_cryptodev_scan_accel_module", 00:07:39.608 "compressdev_scan_accel_module", 00:07:39.608 "keyring_file_remove_key", 00:07:39.608 "keyring_file_add_key", 00:07:39.608 "keyring_linux_set_options", 00:07:39.608 "iscsi_get_histogram", 00:07:39.608 "iscsi_enable_histogram", 00:07:39.608 "iscsi_set_options", 00:07:39.608 "iscsi_get_auth_groups", 00:07:39.608 "iscsi_auth_group_remove_secret", 00:07:39.608 "iscsi_auth_group_add_secret", 00:07:39.608 "iscsi_delete_auth_group", 00:07:39.608 "iscsi_create_auth_group", 00:07:39.608 "iscsi_set_discovery_auth", 00:07:39.608 "iscsi_get_options", 00:07:39.608 "iscsi_target_node_request_logout", 00:07:39.608 "iscsi_target_node_set_redirect", 00:07:39.608 "iscsi_target_node_set_auth", 00:07:39.608 "iscsi_target_node_add_lun", 00:07:39.608 "iscsi_get_stats", 00:07:39.608 "iscsi_get_connections", 00:07:39.608 "iscsi_portal_group_set_auth", 00:07:39.608 "iscsi_start_portal_group", 00:07:39.608 "iscsi_delete_portal_group", 00:07:39.608 "iscsi_create_portal_group", 00:07:39.608 "iscsi_get_portal_groups", 00:07:39.608 "iscsi_delete_target_node", 00:07:39.608 "iscsi_target_node_remove_pg_ig_maps", 00:07:39.608 "iscsi_target_node_add_pg_ig_maps", 00:07:39.608 "iscsi_create_target_node", 00:07:39.608 "iscsi_get_target_nodes", 00:07:39.608 "iscsi_delete_initiator_group", 00:07:39.608 "iscsi_initiator_group_remove_initiators", 00:07:39.608 "iscsi_initiator_group_add_initiators", 00:07:39.608 "iscsi_create_initiator_group", 00:07:39.608 "iscsi_get_initiator_groups", 00:07:39.608 "nvmf_set_crdt", 00:07:39.608 "nvmf_set_config", 00:07:39.608 "nvmf_set_max_subsystems", 00:07:39.608 "nvmf_stop_mdns_prr", 00:07:39.608 "nvmf_publish_mdns_prr", 00:07:39.608 "nvmf_subsystem_get_listeners", 00:07:39.608 "nvmf_subsystem_get_qpairs", 00:07:39.608 "nvmf_subsystem_get_controllers", 
00:07:39.608 "nvmf_get_stats", 00:07:39.608 "nvmf_get_transports", 00:07:39.608 "nvmf_create_transport", 00:07:39.608 "nvmf_get_targets", 00:07:39.608 "nvmf_delete_target", 00:07:39.608 "nvmf_create_target", 00:07:39.608 "nvmf_subsystem_allow_any_host", 00:07:39.608 "nvmf_subsystem_remove_host", 00:07:39.608 "nvmf_subsystem_add_host", 00:07:39.608 "nvmf_ns_remove_host", 00:07:39.608 "nvmf_ns_add_host", 00:07:39.608 "nvmf_subsystem_remove_ns", 00:07:39.608 "nvmf_subsystem_add_ns", 00:07:39.608 "nvmf_subsystem_listener_set_ana_state", 00:07:39.608 "nvmf_discovery_get_referrals", 00:07:39.608 "nvmf_discovery_remove_referral", 00:07:39.608 "nvmf_discovery_add_referral", 00:07:39.608 "nvmf_subsystem_remove_listener", 00:07:39.608 "nvmf_subsystem_add_listener", 00:07:39.608 "nvmf_delete_subsystem", 00:07:39.608 "nvmf_create_subsystem", 00:07:39.608 "nvmf_get_subsystems", 00:07:39.608 "env_dpdk_get_mem_stats", 00:07:39.608 "nbd_get_disks", 00:07:39.608 "nbd_stop_disk", 00:07:39.608 "nbd_start_disk", 00:07:39.608 "ublk_recover_disk", 00:07:39.608 "ublk_get_disks", 00:07:39.608 "ublk_stop_disk", 00:07:39.608 "ublk_start_disk", 00:07:39.608 "ublk_destroy_target", 00:07:39.608 "ublk_create_target", 00:07:39.608 "virtio_blk_create_transport", 00:07:39.608 "virtio_blk_get_transports", 00:07:39.608 "vhost_controller_set_coalescing", 00:07:39.608 "vhost_get_controllers", 00:07:39.608 "vhost_delete_controller", 00:07:39.608 "vhost_create_blk_controller", 00:07:39.608 "vhost_scsi_controller_remove_target", 00:07:39.608 "vhost_scsi_controller_add_target", 00:07:39.608 "vhost_start_scsi_controller", 00:07:39.608 "vhost_create_scsi_controller", 00:07:39.608 "thread_set_cpumask", 00:07:39.608 "framework_get_governor", 00:07:39.608 "framework_get_scheduler", 00:07:39.608 "framework_set_scheduler", 00:07:39.608 "framework_get_reactors", 00:07:39.608 "thread_get_io_channels", 00:07:39.608 "thread_get_pollers", 00:07:39.608 "thread_get_stats", 00:07:39.608 
"framework_monitor_context_switch", 00:07:39.608 "spdk_kill_instance", 00:07:39.608 "log_enable_timestamps", 00:07:39.608 "log_get_flags", 00:07:39.608 "log_clear_flag", 00:07:39.608 "log_set_flag", 00:07:39.608 "log_get_level", 00:07:39.608 "log_set_level", 00:07:39.608 "log_get_print_level", 00:07:39.608 "log_set_print_level", 00:07:39.608 "framework_enable_cpumask_locks", 00:07:39.608 "framework_disable_cpumask_locks", 00:07:39.608 "framework_wait_init", 00:07:39.608 "framework_start_init", 00:07:39.608 "scsi_get_devices", 00:07:39.608 "bdev_get_histogram", 00:07:39.608 "bdev_enable_histogram", 00:07:39.608 "bdev_set_qos_limit", 00:07:39.608 "bdev_set_qd_sampling_period", 00:07:39.608 "bdev_get_bdevs", 00:07:39.608 "bdev_reset_iostat", 00:07:39.608 "bdev_get_iostat", 00:07:39.608 "bdev_examine", 00:07:39.608 "bdev_wait_for_examine", 00:07:39.608 "bdev_set_options", 00:07:39.608 "notify_get_notifications", 00:07:39.608 "notify_get_types", 00:07:39.608 "accel_get_stats", 00:07:39.608 "accel_set_options", 00:07:39.608 "accel_set_driver", 00:07:39.608 "accel_crypto_key_destroy", 00:07:39.608 "accel_crypto_keys_get", 00:07:39.608 "accel_crypto_key_create", 00:07:39.608 "accel_assign_opc", 00:07:39.608 "accel_get_module_info", 00:07:39.608 "accel_get_opc_assignments", 00:07:39.609 "vmd_rescan", 00:07:39.609 "vmd_remove_device", 00:07:39.609 "vmd_enable", 00:07:39.609 "sock_get_default_impl", 00:07:39.609 "sock_set_default_impl", 00:07:39.609 "sock_impl_set_options", 00:07:39.609 "sock_impl_get_options", 00:07:39.609 "iobuf_get_stats", 00:07:39.609 "iobuf_set_options", 00:07:39.609 "framework_get_pci_devices", 00:07:39.609 "framework_get_config", 00:07:39.609 "framework_get_subsystems", 00:07:39.609 "trace_get_info", 00:07:39.609 "trace_get_tpoint_group_mask", 00:07:39.609 "trace_disable_tpoint_group", 00:07:39.609 "trace_enable_tpoint_group", 00:07:39.609 "trace_clear_tpoint_mask", 00:07:39.609 "trace_set_tpoint_mask", 00:07:39.609 "keyring_get_keys", 00:07:39.609 
"spdk_get_version", 00:07:39.609 "rpc_get_methods" 00:07:39.609 ] 00:07:39.609 13:07:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:39.609 13:07:49 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:39.609 13:07:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.609 13:07:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:39.609 13:07:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 782556 00:07:39.609 13:07:50 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 782556 ']' 00:07:39.609 13:07:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 782556 00:07:39.609 13:07:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:39.609 13:07:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.609 13:07:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 782556 00:07:39.868 13:07:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.868 13:07:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.868 13:07:50 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 782556' 00:07:39.868 killing process with pid 782556 00:07:39.868 13:07:50 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 782556 00:07:39.868 13:07:50 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 782556 00:07:40.126 00:07:40.126 real 0m1.738s 00:07:40.126 user 0m3.147s 00:07:40.126 sys 0m0.578s 00:07:40.126 13:07:50 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.126 13:07:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:40.126 ************************************ 00:07:40.126 END TEST spdkcli_tcp 00:07:40.126 ************************************ 00:07:40.126 13:07:50 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:40.126 13:07:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.126 13:07:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.126 13:07:50 -- common/autotest_common.sh@10 -- # set +x 00:07:40.127 ************************************ 00:07:40.127 START TEST dpdk_mem_utility 00:07:40.127 ************************************ 00:07:40.127 13:07:50 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:40.386 * Looking for test storage... 00:07:40.386 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility 00:07:40.386 13:07:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:40.386 13:07:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=783042 00:07:40.386 13:07:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 783042 00:07:40.386 13:07:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:40.386 13:07:50 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 783042 ']' 00:07:40.386 13:07:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.386 13:07:50 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.386 13:07:50 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:40.386 13:07:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:40.386 13:07:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:40.386 [2024-07-25 13:07:50.694530] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:07:40.386 [2024-07-25 13:07:50.694595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783042 ]
00:07:40.386 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:40.386 EAL: Requested device 0000:3d:01.0 cannot be used
[the two messages above repeat identically for each remaining QAT virtual function: 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7]
00:07:40.386 [2024-07-25 13:07:50.817863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:40.645 [2024-07-25 13:07:50.903637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:41.211 13:07:51 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:41.211 13:07:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0
00:07:41.211 13:07:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:07:41.211 13:07:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:07:41.211 13:07:51 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:41.211 13:07:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:41.211 { 00:07:41.211 "filename": 
"/tmp/spdk_mem_dump.txt" 00:07:41.211 } 00:07:41.211 13:07:51 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.211 13:07:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:41.211 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:41.211 1 heaps totaling size 814.000000 MiB 00:07:41.211 size: 814.000000 MiB heap id: 0 00:07:41.211 end heaps---------- 00:07:41.211 8 mempools totaling size 598.116089 MiB 00:07:41.211 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:41.211 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:41.211 size: 84.521057 MiB name: bdev_io_783042 00:07:41.211 size: 51.011292 MiB name: evtpool_783042 00:07:41.211 size: 50.003479 MiB name: msgpool_783042 00:07:41.211 size: 21.763794 MiB name: PDU_Pool 00:07:41.211 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:41.211 size: 0.026123 MiB name: Session_Pool 00:07:41.211 end mempools------- 00:07:41.211 201 memzones totaling size 4.176453 MiB 00:07:41.212 size: 1.000366 MiB name: RG_ring_0_783042 00:07:41.212 size: 1.000366 MiB name: RG_ring_1_783042 00:07:41.212 size: 1.000366 MiB name: RG_ring_4_783042 00:07:41.212 size: 1.000366 MiB name: RG_ring_5_783042 00:07:41.212 size: 0.125366 MiB name: RG_ring_2_783042 00:07:41.212 size: 0.015991 MiB name: RG_ring_3_783042 00:07:41.212 size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:07:41.212 size: 0.000305 MiB name: 0000:1a:01.0_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1a:01.1_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1a:01.2_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1a:01.3_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1a:01.4_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1a:01.5_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1a:01.6_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1a:01.7_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1a:02.0_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1a:02.1_qat 
00:07:41.212 size: 0.000305 MiB name: 0000:1a:02.2_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1a:02.3_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1a:02.4_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1a:02.5_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1a:02.6_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1a:02.7_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1c:01.0_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1c:01.1_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1c:01.2_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1c:01.3_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1c:01.4_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1c:01.5_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1c:01.6_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1c:01.7_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1c:02.0_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1c:02.1_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1c:02.2_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1c:02.3_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1c:02.4_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1c:02.5_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1c:02.6_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1c:02.7_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1e:01.0_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1e:01.1_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1e:01.2_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1e:01.3_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1e:01.4_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1e:01.5_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1e:01.6_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1e:01.7_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1e:02.0_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1e:02.1_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1e:02.2_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1e:02.3_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1e:02.4_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1e:02.5_qat 00:07:41.212 size: 
0.000305 MiB name: 0000:1e:02.6_qat 00:07:41.212 size: 0.000305 MiB name: 0000:1e:02.7_qat 00:07:41.212 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_0 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_1 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_0 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_2 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_3 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_1 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_4 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_5 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_2 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_6 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_7 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_3 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_8 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_9 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_4 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_10 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_11 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_5 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_12 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_13 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_6 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_14 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_15 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_7 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_16 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_17 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_8 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_18 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_19 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_9 00:07:41.212 size: 0.000122 MiB name: 
rte_cryptodev_data_20 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_21 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_10 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_22 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_23 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_11 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_24 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_25 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_12 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_26 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_27 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_13 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_28 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_29 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_14 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_30 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_31 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_15 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_32 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_33 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_16 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_34 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_35 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_17 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_36 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_37 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_18 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_38 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_39 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_19 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_40 00:07:41.212 size: 0.000122 MiB name: rte_cryptodev_data_41 00:07:41.212 size: 0.000122 MiB name: rte_compressdev_data_20 00:07:41.212 size: 0.000122 MiB 
name: rte_cryptodev_data_42
00:07:41.212 [memzone list continues: rte_cryptodev_data_43 through rte_cryptodev_data_95 and rte_compressdev_data_21 through rte_compressdev_data_47, each size: 0.000122 MiB]
00:07:41.213 size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1
00:07:41.213 end memzones-------
00:07:41.213 13:07:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:07:41.473 heap id: 0 total size: 814.000000 MiB number of busy elements: 637 number of free elements: 14
00:07:41.473 list of free elements. size: 11.781372 MiB
00:07:41.473 element at address: 0x200000400000 with size: 1.999512 MiB
00:07:41.473 element at address: 0x200018e00000 with size: 0.999878 MiB
00:07:41.473 element at address: 0x200019000000 with size: 0.999878 MiB
00:07:41.473 element at address: 0x200003e00000 with size: 0.996460 MiB
00:07:41.473 element at address: 0x200031c00000 with size: 0.994446 MiB
00:07:41.473 element at address: 0x200013800000 with size: 0.978699 MiB
00:07:41.473 element at address: 0x200007000000 with size: 0.959839 MiB
00:07:41.473 element at address: 0x200019200000 with size: 0.936584 MiB
00:07:41.473 element at address: 0x20001aa00000 with size: 0.564941 MiB
00:07:41.473 element at address: 0x200003a00000 with size: 0.494141 MiB
00:07:41.473 element at address: 0x20000b200000 with size: 0.489075 MiB
00:07:41.473 element at address: 0x200000800000 with size: 0.486511 MiB
00:07:41.473 element at address: 0x200019400000 with size: 0.485657 MiB
00:07:41.473 element at address: 0x200027e00000 with size: 0.395752 MiB
00:07:41.473 list of standard malloc elements. size: 199.898621 MiB
00:07:41.473 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:07:41.473 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:07:41.473 element at address: 0x200018efff80 with size: 1.000122 MiB
00:07:41.473 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:07:41.473 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:07:41.473 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:07:41.473 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:07:41.473 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:07:41.474 [malloc element list continues: several hundred elements with sizes 0.004395, 0.004028, 0.000305, and 0.000183 MiB at addresses 0x20000032bc80 through 0x20001aa91540]
00:07:41.477 element at address: 0x20001aa91600 with
size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:07:41.477 element at address: 
0x20001aa92b00 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:07:41.477 
element at address: 0x20001aa94000 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:41.477 element at address: 0x20001aa95440 with size: 0.000183 
MiB 00:07:41.477 element at address: 0x200027e65500 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6d5c0 
with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:07:41.477 element at 
address: 0x200027e6eac0 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:07:41.477 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:41.478 element at address: 0x200027e6ff00 with size: 0.000183 MiB 
00:07:41.478 list of memzone associated elements. size: 602.320007 MiB
00:07:41.478 [memzone listing condensed — element size / associated memzone name(s):]
00:07:41.478   211.416748 MiB  MP_PDU_immediate_data_Pool_0
00:07:41.478   157.562561 MiB  MP_PDU_data_out_Pool_0
00:07:41.478    84.020630 MiB  MP_bdev_io_783042_0
00:07:41.478    48.003052 MiB  MP_evtpool_783042_0, MP_msgpool_783042_0
00:07:41.478    20.255554 MiB  MP_PDU_Pool_0
00:07:41.478    18.005066 MiB  MP_SCSI_TASK_Pool_0
00:07:41.478     2.000488 MiB  RG_MP_evtpool_783042, RG_MP_msgpool_783042
00:07:41.478     1.008118 MiB  MP_evtpool_783042, MP_PDU_Pool, MP_PDU_immediate_data_Pool, MP_PDU_data_out_Pool, MP_SCSI_TASK_Pool
00:07:41.478     1.000488 MiB  RG_ring_0_783042, RG_ring_1_783042, RG_ring_4_783042, RG_ring_5_783042
00:07:41.478     0.500488 MiB  RG_MP_bdev_io_783042, RG_MP_PDU_Pool, RG_MP_SCSI_TASK_Pool
00:07:41.478     0.250488 MiB  RG_MP_PDU_immediate_data_Pool
00:07:41.478     0.125488 MiB  RG_ring_2_783042
00:07:41.478     0.031738 MiB  RG_MP_PDU_data_out_Pool
00:07:41.478     0.023743 MiB  MP_Session_Pool_0
00:07:41.478     0.016113 MiB  RG_ring_3_783042
00:07:41.478     0.002441 MiB  RG_MP_Session_Pool
00:07:41.478     0.001282 MiB  QAT_SYM_CAPA_GEN_1
00:07:41.479     0.000427 MiB  48 QAT device memzones: 0000:1a:01.0_qat … 0000:1a:02.7_qat, 0000:1c:01.0_qat … 0000:1c:02.7_qat, 0000:1e:01.0_qat … 0000:1e:02.7_qat
00:07:41.479     0.000305 MiB  QAT_ASYM_CAPA_GEN_1, MP_msgpool_783042, MP_bdev_io_783042, MP_Session_Pool
00:07:41.479     0.000244 MiB  rte_cryptodev_data_0 … rte_cryptodev_data_40, rte_compressdev_data_0 … rte_compressdev_data_19 (listing truncated at rte_cryptodev_data_40)
00:07:41.480 element at address: 0x20000038c940 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_41 00:07:41.480 element at address: 0x20000038c6c0 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_20 00:07:41.480 element at address: 0x200000389040 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_42 00:07:41.480 element at address: 0x200000388e80 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_43 00:07:41.480 element at address: 0x200000388c00 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_21 00:07:41.480 element at address: 0x200000385580 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_44 00:07:41.480 element at address: 0x2000003853c0 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_45 00:07:41.480 element at address: 0x200000385140 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_22 00:07:41.480 element at address: 0x200000381ac0 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_46 00:07:41.480 element at address: 0x200000381900 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_47 00:07:41.480 element at address: 0x200000381680 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_23 00:07:41.480 element at address: 0x20000037e000 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_48 00:07:41.480 element at address: 0x20000037de40 with size: 0.000244 MiB 00:07:41.480 associated memzone 
info: size: 0.000122 MiB name: rte_cryptodev_data_49 00:07:41.480 element at address: 0x20000037dbc0 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_24 00:07:41.480 element at address: 0x20000037a540 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_50 00:07:41.480 element at address: 0x20000037a380 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_51 00:07:41.480 element at address: 0x20000037a100 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_25 00:07:41.480 element at address: 0x200000376a80 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_52 00:07:41.480 element at address: 0x2000003768c0 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_53 00:07:41.480 element at address: 0x200000376640 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_26 00:07:41.480 element at address: 0x200000372fc0 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_54 00:07:41.480 element at address: 0x200000372e00 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_55 00:07:41.480 element at address: 0x200000372b80 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_27 00:07:41.480 element at address: 0x20000036f500 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_56 00:07:41.480 element at address: 0x20000036f340 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_57 00:07:41.480 element at address: 0x20000036f0c0 with 
size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_28 00:07:41.480 element at address: 0x20000036ba40 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_58 00:07:41.480 element at address: 0x20000036b880 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_59 00:07:41.480 element at address: 0x20000036b600 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_29 00:07:41.480 element at address: 0x200000367f80 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_60 00:07:41.480 element at address: 0x200000367dc0 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_61 00:07:41.480 element at address: 0x200000367b40 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_30 00:07:41.480 element at address: 0x2000003644c0 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_62 00:07:41.480 element at address: 0x200000364300 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_63 00:07:41.480 element at address: 0x200000364080 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_31 00:07:41.480 element at address: 0x200000360a00 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_64 00:07:41.480 element at address: 0x200000360840 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_65 00:07:41.480 element at address: 0x2000003605c0 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_32 
00:07:41.480 element at address: 0x20000035cf40 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_66 00:07:41.480 element at address: 0x20000035cd80 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_67 00:07:41.480 element at address: 0x20000035cb00 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_33 00:07:41.480 element at address: 0x200000359480 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_68 00:07:41.480 element at address: 0x2000003592c0 with size: 0.000244 MiB 00:07:41.480 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_69 00:07:41.481 element at address: 0x200000359040 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_34 00:07:41.481 element at address: 0x2000003559c0 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_70 00:07:41.481 element at address: 0x200000355800 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_71 00:07:41.481 element at address: 0x200000355580 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_35 00:07:41.481 element at address: 0x200000351f00 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_72 00:07:41.481 element at address: 0x200000351d40 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_73 00:07:41.481 element at address: 0x200000351ac0 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_36 00:07:41.481 element at address: 0x20000034e440 with size: 0.000244 MiB 00:07:41.481 associated memzone 
info: size: 0.000122 MiB name: rte_cryptodev_data_74 00:07:41.481 element at address: 0x20000034e280 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_75 00:07:41.481 element at address: 0x20000034e000 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_37 00:07:41.481 element at address: 0x20000034a980 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_76 00:07:41.481 element at address: 0x20000034a7c0 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_77 00:07:41.481 element at address: 0x20000034a540 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_38 00:07:41.481 element at address: 0x200000346ec0 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_78 00:07:41.481 element at address: 0x200000346d00 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_79 00:07:41.481 element at address: 0x200000346a80 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_39 00:07:41.481 element at address: 0x200000343400 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_80 00:07:41.481 element at address: 0x200000343240 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_81 00:07:41.481 element at address: 0x200000342fc0 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_40 00:07:41.481 element at address: 0x20000033f940 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_82 00:07:41.481 element at address: 0x20000033f780 with 
size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_83 00:07:41.481 element at address: 0x20000033f500 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_41 00:07:41.481 element at address: 0x20000033be80 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_84 00:07:41.481 element at address: 0x20000033bcc0 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_85 00:07:41.481 element at address: 0x20000033ba40 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_42 00:07:41.481 element at address: 0x2000003383c0 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_86 00:07:41.481 element at address: 0x200000338200 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_87 00:07:41.481 element at address: 0x200000337f80 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_43 00:07:41.481 element at address: 0x200000334900 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_88 00:07:41.481 element at address: 0x200000334740 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_89 00:07:41.481 element at address: 0x2000003344c0 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_44 00:07:41.481 element at address: 0x200000330e40 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_90 00:07:41.481 element at address: 0x200000330c80 with size: 0.000244 MiB 00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_91 
00:07:41.481 element at address: 0x200000330a00 with size: 0.000244 MiB
00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_45
00:07:41.481 element at address: 0x20000032d380 with size: 0.000244 MiB
00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_92
00:07:41.481 element at address: 0x20000032d1c0 with size: 0.000244 MiB
00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_93
00:07:41.481 element at address: 0x20000032cf40 with size: 0.000244 MiB
00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_46
00:07:41.481 element at address: 0x2000003298c0 with size: 0.000244 MiB
00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_94
00:07:41.481 element at address: 0x200000329700 with size: 0.000244 MiB
00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_95
00:07:41.481 element at address: 0x200000329480 with size: 0.000244 MiB
00:07:41.481 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_47
00:07:41.481 element at address: 0x2000003d5d00 with size: 0.000183 MiB
00:07:41.481 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1
00:07:41.481 13:07:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:07:41.481 13:07:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 783042
00:07:41.481 13:07:51 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 783042 ']'
00:07:41.481 13:07:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 783042
00:07:41.481 13:07:51 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:07:41.481 13:07:51 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:41.481 13:07:51 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 783042
00:07:41.481 13:07:51 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:41.481 13:07:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:41.481 13:07:51 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 783042'
00:07:41.481 killing process with pid 783042
00:07:41.481 13:07:51 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 783042
00:07:41.481 13:07:51 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 783042
00:07:41.740
00:07:41.740 real 0m1.634s
00:07:41.740 user 0m1.751s
00:07:41.740 sys 0m0.528s
00:07:41.740 13:07:52 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:41.740 13:07:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:41.740 ************************************
00:07:41.740 END TEST dpdk_mem_utility
00:07:41.740 ************************************
00:07:41.740 13:07:52 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh
00:07:41.740 13:07:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:41.740 13:07:52 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:41.740 13:07:52 -- common/autotest_common.sh@10 -- # set +x
00:07:41.999 ************************************
00:07:41.999 START TEST event
00:07:41.999 ************************************
00:07:41.999 13:07:52 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh
00:07:41.999 * Looking for test storage...
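The xtrace above shows the harness's `killprocess` helper validating the PID, resolving the process name with `ps`, and then killing and reaping it. Below is a minimal sketch of that flow reconstructed from the trace; the branch structure and names are assumptions, not the actual autotest_common.sh source.

```shell
#!/usr/bin/env bash
# Sketch of a killprocess-style helper, reconstructed from the xtrace output
# above. Details are assumptions, not the real SPDK autotest_common.sh.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1               # reject an empty PID argument
    kill -0 "$pid" 2>/dev/null || return 0  # nothing to do if already gone
    local process_name
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    else
        process_name=$(ps -o comm= "$pid")
    fi
    if [ "$process_name" = sudo ]; then
        sudo pkill -P "$pid"                # kill the child, not the sudo wrapper
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null                 # reap so no zombie is left behind
    return 0
}

sleep 60 &
killprocess $!
```

In the trace, pid 783042 resolves to `process_name=reactor_0`, which is not `sudo`, so the plain `kill`/`wait` branch runs.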
00:07:41.999 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event
00:07:41.999 13:07:52 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh
00:07:41.999 13:07:52 event -- bdev/nbd_common.sh@6 -- # set -e
00:07:41.999 13:07:52 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:41.999 13:07:52 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:07:41.999 13:07:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:41.999 13:07:52 event -- common/autotest_common.sh@10 -- # set +x
00:07:41.999 ************************************
00:07:41.999 START TEST event_perf
00:07:41.999 ************************************
00:07:42.000 13:07:52 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:42.000 Running I/O for 1 seconds...[2024-07-25 13:07:52.408236] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:07:42.000 [2024-07-25 13:07:52.408292] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783392 ] 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.000 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:42.000 
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:42.000 EAL: Requested device 0000:3f:02.2 cannot be used
00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:42.000 EAL: Requested device 0000:3f:02.3 cannot be used
00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:42.000 EAL: Requested device 0000:3f:02.4 cannot be used
00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:42.000 EAL: Requested device 0000:3f:02.5 cannot be used
00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:42.000 EAL: Requested device 0000:3f:02.6 cannot be used
00:07:42.000 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:42.000 EAL: Requested device 0000:3f:02.7 cannot be used
00:07:42.259 [2024-07-25 13:07:52.540128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:42.259 [2024-07-25 13:07:52.627150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:42.259 [2024-07-25 13:07:52.627215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:42.259 [2024-07-25 13:07:52.627299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:42.259 [2024-07-25 13:07:52.627303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:43.634 Running I/O for 1 seconds...
00:07:43.634 lcore 0: 186472
00:07:43.634 lcore 1: 186472
00:07:43.634 lcore 2: 186471
00:07:43.634 lcore 3: 186473
00:07:43.634 done.
00:07:43.634
00:07:43.634 real 0m1.324s
00:07:43.634 user 0m4.177s
00:07:43.634 sys 0m0.142s
00:07:43.634 13:07:53 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:43.634 13:07:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:07:43.634 ************************************
00:07:43.634 END TEST event_perf
00:07:43.634 ************************************
00:07:43.634 13:07:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:07:43.634 13:07:53 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:43.634 13:07:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:43.634 13:07:53 event -- common/autotest_common.sh@10 -- # set +x
00:07:43.634 ************************************
00:07:43.634 START TEST event_reactor
00:07:43.634 ************************************
00:07:43.634 13:07:53 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:07:43.634 [2024-07-25 13:07:53.811151] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
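Each `START TEST`/`END TEST` banner pair in this log is produced by the harness's `run_test` wrapper, which also times the test binary (the `real`/`user`/`sys` lines). A rough sketch of such a wrapper, reconstructed from the xtrace; the banner mechanics and argument check are assumptions, not the actual autotest_common.sh implementation.

```shell
#!/usr/bin/env bash
# Rough sketch of a run_test-style wrapper; details are assumptions based on
# the xtrace in this log, not the real SPDK autotest_common.sh.
run_test() {
    local test_name=$1; shift
    [ "$#" -le 0 ] && return 1     # mirrors the "'[' N -le 1 ']'" argument check
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                      # the time keyword emits the real/user/sys lines
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

run_test demo true
```

In the log, `run_test event_reactor …/reactor -t 1` follows this shape: banner, timed command, closing banner, with `xtrace_disable`/`set +x` suppressing tracing around the banners.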
00:07:43.634 [2024-07-25 13:07:53.811211] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783679 ] 00:07:43.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.634 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:43.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.634 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:43.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.634 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:43.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.634 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:43.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.634 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:43.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.634 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:43.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.634 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:43.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.634 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.635 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:43.635 
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:43.635 EAL: Requested device 0000:3f:02.2 cannot be used
00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:43.635 EAL: Requested device 0000:3f:02.3 cannot be used
00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:43.635 EAL: Requested device 0000:3f:02.4 cannot be used
00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:43.635 EAL: Requested device 0000:3f:02.5 cannot be used
00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:43.635 EAL: Requested device 0000:3f:02.6 cannot be used
00:07:43.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:43.635 EAL: Requested device 0000:3f:02.7 cannot be used
00:07:43.635 [2024-07-25 13:07:53.941974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:43.635 [2024-07-25 13:07:54.024109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:45.011 test_start
00:07:45.011 oneshot
00:07:45.011 tick 100
00:07:45.011 tick 100
00:07:45.011 tick 250
00:07:45.011 tick 100
00:07:45.011 tick 100
00:07:45.011 tick 250
00:07:45.011 tick 100
00:07:45.011 tick 500
00:07:45.011 tick 100
00:07:45.011 tick 100
00:07:45.011 tick 250
00:07:45.011 tick 100
00:07:45.011 tick 100
00:07:45.011 test_end
00:07:45.011
00:07:45.011 real 0m1.316s
00:07:45.011 user 0m1.176s
00:07:45.011 sys 0m0.135s
00:07:45.011 13:07:55 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:45.011 13:07:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:07:45.011 ************************************
00:07:45.011 END TEST event_reactor
00:07:45.011 ************************************
00:07:45.011 13:07:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:45.011
13:07:55 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:45.011 13:07:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.011 13:07:55 event -- common/autotest_common.sh@10 -- # set +x 00:07:45.011 ************************************ 00:07:45.011 START TEST event_reactor_perf 00:07:45.011 ************************************ 00:07:45.011 13:07:55 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:45.011 [2024-07-25 13:07:55.197622] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:07:45.011 [2024-07-25 13:07:55.197683] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783882 ] 00:07:45.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:45.011 EAL: Requested device 0000:3d:01.0 cannot be used [the same two messages repeat for each QAT virtual function through 0000:3f:02.7] 00:07:45.012 [2024-07-25 13:07:55.316252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.012 [2024-07-25 13:07:55.399109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.389 test_start 00:07:46.389 test_end 00:07:46.389 Performance: 355906 events per second 00:07:46.389 00:07:46.389 real 0m1.303s 00:07:46.389 user 0m1.168s 00:07:46.389 sys 0m0.129s 00:07:46.389 13:07:56 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.389
13:07:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:46.389 ************************************ 00:07:46.389 END TEST event_reactor_perf 00:07:46.389 ************************************ 00:07:46.389 13:07:56 event -- event/event.sh@49 -- # uname -s 00:07:46.389 13:07:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:46.389 13:07:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:46.389 13:07:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.389 13:07:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.389 13:07:56 event -- common/autotest_common.sh@10 -- # set +x 00:07:46.389 ************************************ 00:07:46.389 START TEST event_scheduler 00:07:46.389 ************************************ 00:07:46.389 13:07:56 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:46.389 * Looking for test storage... 
00:07:46.389 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler 00:07:46.389 13:07:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:46.389 13:07:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=784153 00:07:46.389 13:07:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:46.389 13:07:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:46.389 13:07:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 784153 00:07:46.389 13:07:56 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 784153 ']' 00:07:46.389 13:07:56 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.389 13:07:56 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.389 13:07:56 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.389 13:07:56 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.389 13:07:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:46.389 [2024-07-25 13:07:56.721013] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:07:46.389 [2024-07-25 13:07:56.721074] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784153 ] 00:07:46.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.389 EAL: Requested device 0000:3d:01.0 cannot be used [the same two messages repeat for each QAT virtual function through 0000:3f:02.7] 00:07:46.390 [2024-07-25 13:07:56.825105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.649 [2024-07-25 13:07:56.898336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.649 [2024-07-25 13:07:56.898421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.649 [2024-07-25 13:07:56.898501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.649 [2024-07-25 13:07:56.898503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.216 13:07:57 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.216 13:07:57 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:47.216 13:07:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:47.216 13:07:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.216 13:07:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:47.216 [2024-07-25 13:07:57.637217] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:07:47.216 [2024-07-25 13:07:57.637236] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:47.216 [2024-07-25 13:07:57.637247] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:47.216 [2024-07-25 13:07:57.637255] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:47.216 [2024-07-25 13:07:57.637262] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:47.216 13:07:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.216 13:07:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:47.216 13:07:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.216 13:07:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:47.475 [2024-07-25 13:07:57.720131] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:47.475 13:07:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.475 13:07:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:47.475 13:07:57 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.475 13:07:57 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.475 13:07:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:47.475 ************************************ 00:07:47.475 START TEST scheduler_create_thread 00:07:47.475 ************************************ 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:47.475 13:07:57 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.475 2 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.475 3 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.475 4 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.475 5 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.475 13:07:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 6 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 7 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 8 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 
-- # xtrace_disable 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 9 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 10 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.476 13:07:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.853 13:07:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.853 13:07:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:48.853 13:07:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:48.853 13:07:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.853 13:07:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.231 13:08:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.231 00:07:50.231 real 0m2.618s 00:07:50.231 user 0m0.024s 00:07:50.231 sys 0m0.006s 00:07:50.231 13:08:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.231 13:08:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.231 ************************************ 00:07:50.231 END TEST scheduler_create_thread 00:07:50.231 ************************************ 00:07:50.231 13:08:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:50.231 13:08:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 784153 00:07:50.231 13:08:00 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 784153 ']' 00:07:50.231 13:08:00 event.event_scheduler -- 
common/autotest_common.sh@954 -- # kill -0 784153 00:07:50.231 13:08:00 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:50.231 13:08:00 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:50.231 13:08:00 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 784153 00:07:50.231 13:08:00 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:50.231 13:08:00 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:50.231 13:08:00 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 784153' 00:07:50.231 killing process with pid 784153 00:07:50.231 13:08:00 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 784153 00:07:50.231 13:08:00 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 784153 00:07:50.490 [2024-07-25 13:08:00.858370] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:50.749 00:07:50.749 real 0m4.490s 00:07:50.749 user 0m8.571s 00:07:50.749 sys 0m0.489s 00:07:50.749 13:08:01 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.749 13:08:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:50.749 ************************************ 00:07:50.749 END TEST event_scheduler 00:07:50.749 ************************************ 00:07:50.749 13:08:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:50.749 13:08:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:50.749 13:08:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.749 13:08:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.749 13:08:01 event -- common/autotest_common.sh@10 -- # set +x 00:07:50.749 ************************************ 00:07:50.749 START TEST app_repeat 00:07:50.749 ************************************ 00:07:50.749 13:08:01 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:50.749 13:08:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.749 13:08:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:50.749 13:08:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:50.749 13:08:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:50.749 13:08:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:50.749 13:08:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:50.749 13:08:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:50.749 13:08:01 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:50.749 13:08:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=784924 00:07:50.749 13:08:01 event.app_repeat -- event/event.sh@20 -- 
# trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:50.749 13:08:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 784924' 00:07:50.749 Process app_repeat pid: 784924 00:07:50.749 13:08:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:50.749 13:08:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:50.749 spdk_app_start Round 0 00:07:50.749 13:08:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 784924 /var/tmp/spdk-nbd.sock 00:07:50.749 13:08:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 784924 ']' 00:07:50.749 13:08:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:50.749 13:08:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.749 13:08:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:50.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:50.749 13:08:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.749 13:08:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:50.750 [2024-07-25 13:08:01.175458] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:07:50.750 [2024-07-25 13:08:01.175515] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784924 ] 00:07:51.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:51.010 EAL: Requested device 0000:3d:01.0 cannot be used [the same two messages repeat for each QAT virtual function through 0000:3f:02.7] 00:07:51.010 [2024-07-25 13:08:01.307406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:51.010 [2024-07-25 13:08:01.397504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.010 [2024-07-25 13:08:01.397510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.946 13:08:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.946 13:08:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:51.946 13:08:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:51.946 Malloc0 00:07:51.946 13:08:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:52.239 Malloc1 00:07:52.239 13:08:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:52.239 13:08:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.239
13:08:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:52.239 13:08:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:52.239 13:08:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.239 13:08:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:52.239 13:08:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:52.239 13:08:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.239 13:08:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:52.239 13:08:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:52.239 13:08:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.239 13:08:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:52.239 13:08:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:52.239 13:08:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:52.239 13:08:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:52.239 13:08:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:52.506 /dev/nbd0 00:07:52.506 13:08:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:52.506 13:08:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:52.506 13:08:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:52.506 13:08:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:52.506 13:08:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:52.506 13:08:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:52.506 
13:08:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:52.506 13:08:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:52.506 13:08:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:52.506 13:08:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:52.506 13:08:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:52.506 1+0 records in 00:07:52.506 1+0 records out 00:07:52.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252108 s, 16.2 MB/s 00:07:52.506 13:08:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:52.506 13:08:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:52.506 13:08:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:52.506 13:08:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:52.506 13:08:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:52.506 13:08:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.506 13:08:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:52.506 13:08:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:52.766 /dev/nbd1 00:07:52.766 13:08:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:52.766 13:08:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:52.766 13:08:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:52.766 13:08:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 
00:07:52.766 13:08:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:52.766 13:08:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:52.766 13:08:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:52.766 13:08:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:52.766 13:08:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:52.766 13:08:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:52.766 13:08:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:52.766 1+0 records in 00:07:52.766 1+0 records out 00:07:52.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274965 s, 14.9 MB/s 00:07:52.766 13:08:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:52.766 13:08:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:52.766 13:08:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:52.766 13:08:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:52.766 13:08:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:52.766 13:08:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.766 13:08:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:52.766 13:08:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:52.766 13:08:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.766 13:08:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:53.025 { 00:07:53.025 "nbd_device": "/dev/nbd0", 00:07:53.025 "bdev_name": "Malloc0" 00:07:53.025 }, 00:07:53.025 { 00:07:53.025 "nbd_device": "/dev/nbd1", 00:07:53.025 "bdev_name": "Malloc1" 00:07:53.025 } 00:07:53.025 ]' 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:53.025 { 00:07:53.025 "nbd_device": "/dev/nbd0", 00:07:53.025 "bdev_name": "Malloc0" 00:07:53.025 }, 00:07:53.025 { 00:07:53.025 "nbd_device": "/dev/nbd1", 00:07:53.025 "bdev_name": "Malloc1" 00:07:53.025 } 00:07:53.025 ]' 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:53.025 /dev/nbd1' 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:53.025 /dev/nbd1' 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 
00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:53.025 256+0 records in 00:07:53.025 256+0 records out 00:07:53.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106212 s, 98.7 MB/s 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:53.025 256+0 records in 00:07:53.025 256+0 records out 00:07:53.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175703 s, 59.7 MB/s 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:53.025 13:08:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:53.026 256+0 records in 00:07:53.026 256+0 records out 00:07:53.026 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184956 s, 56.7 MB/s 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.026 13:08:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:53.283 13:08:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:53.283 13:08:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:53.283 13:08:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:53.283 13:08:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.283 13:08:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.283 13:08:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:53.283 13:08:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:53.283 13:08:03 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:53.283 13:08:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.283 13:08:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:53.541 13:08:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:53.541 13:08:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:53.541 13:08:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:53.541 13:08:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.541 13:08:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.541 13:08:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:53.541 13:08:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:53.542 13:08:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.542 13:08:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:53.542 13:08:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.542 13:08:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:53.800 13:08:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:53.800 13:08:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:53.800 13:08:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:53.800 13:08:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:53.800 13:08:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:53.800 13:08:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:53.800 13:08:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:53.800 13:08:04 
event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:53.800 13:08:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:53.800 13:08:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:53.800 13:08:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:53.800 13:08:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:53.800 13:08:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:54.058 13:08:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:54.317 [2024-07-25 13:08:04.619433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:54.317 [2024-07-25 13:08:04.701582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.317 [2024-07-25 13:08:04.701588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.317 [2024-07-25 13:08:04.745251] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:54.317 [2024-07-25 13:08:04.745299] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:07:57.607 13:08:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:57.607 13:08:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:57.607 spdk_app_start Round 1 00:07:57.607 13:08:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 784924 /var/tmp/spdk-nbd.sock 00:07:57.607 13:08:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 784924 ']' 00:07:57.607 13:08:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:57.607 13:08:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.607 13:08:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:57.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:57.607 13:08:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.607 13:08:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:57.607 13:08:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.607 13:08:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:57.607 13:08:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:57.607 Malloc0 00:07:57.607 13:08:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:57.607 Malloc1 00:07:57.607 13:08:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:57.607 13:08:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.607 13:08:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # 
bdev_list=('Malloc0' 'Malloc1') 00:07:57.607 13:08:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:57.607 13:08:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:57.607 13:08:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:57.607 13:08:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:57.607 13:08:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.607 13:08:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:57.607 13:08:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:57.607 13:08:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:57.607 13:08:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:57.607 13:08:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:57.607 13:08:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:57.607 13:08:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:57.607 13:08:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:57.866 /dev/nbd0 00:07:57.866 13:08:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:57.866 13:08:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:57.866 13:08:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:57.866 13:08:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:57.866 13:08:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:57.866 13:08:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:57.866 13:08:08 event.app_repeat -- common/autotest_common.sh@872 -- # 
grep -q -w nbd0 /proc/partitions 00:07:57.866 13:08:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:57.866 13:08:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:57.866 13:08:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:57.866 13:08:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:57.866 1+0 records in 00:07:57.866 1+0 records out 00:07:57.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000151597 s, 27.0 MB/s 00:07:57.866 13:08:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:57.866 13:08:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:57.867 13:08:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:57.867 13:08:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:57.867 13:08:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:57.867 13:08:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.867 13:08:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:57.867 13:08:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:58.124 /dev/nbd1 00:07:58.124 13:08:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:58.124 13:08:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:58.124 13:08:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:58.125 13:08:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:58.125 13:08:08 event.app_repeat -- common/autotest_common.sh@871 -- 
# (( i = 1 )) 00:07:58.125 13:08:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:58.125 13:08:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:58.125 13:08:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:58.125 13:08:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:58.125 13:08:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:58.125 13:08:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:58.125 1+0 records in 00:07:58.125 1+0 records out 00:07:58.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231478 s, 17.7 MB/s 00:07:58.125 13:08:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:58.125 13:08:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:58.125 13:08:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:58.125 13:08:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:58.125 13:08:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:58.125 13:08:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.125 13:08:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:58.125 13:08:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:58.125 13:08:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.125 13:08:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:58.383 { 00:07:58.383 "nbd_device": "/dev/nbd0", 00:07:58.383 "bdev_name": "Malloc0" 00:07:58.383 }, 00:07:58.383 { 00:07:58.383 "nbd_device": "/dev/nbd1", 00:07:58.383 "bdev_name": "Malloc1" 00:07:58.383 } 00:07:58.383 ]' 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:58.383 { 00:07:58.383 "nbd_device": "/dev/nbd0", 00:07:58.383 "bdev_name": "Malloc0" 00:07:58.383 }, 00:07:58.383 { 00:07:58.383 "nbd_device": "/dev/nbd1", 00:07:58.383 "bdev_name": "Malloc1" 00:07:58.383 } 00:07:58.383 ]' 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:58.383 /dev/nbd1' 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:58.383 /dev/nbd1' 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:58.383 256+0 records in 00:07:58.383 256+0 records out 00:07:58.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113305 s, 92.5 MB/s 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:58.383 256+0 records in 00:07:58.383 256+0 records out 00:07:58.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169526 s, 61.9 MB/s 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:58.383 256+0 records in 00:07:58.383 256+0 records out 00:07:58.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182531 s, 57.4 MB/s 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b 
-n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.383 13:08:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:58.642 13:08:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:58.642 13:08:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:58.642 13:08:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:58.642 13:08:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.642 13:08:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.642 13:08:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:58.642 13:08:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:58.642 13:08:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.642 13:08:09 event.app_repeat -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.642 13:08:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:58.901 13:08:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:58.901 13:08:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:58.901 13:08:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:58.901 13:08:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.901 13:08:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.901 13:08:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:58.901 13:08:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:58.901 13:08:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.901 13:08:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:58.901 13:08:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.901 13:08:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:59.159 13:08:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:59.160 13:08:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:59.160 13:08:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:59.160 13:08:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:59.160 13:08:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:59.160 13:08:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:59.160 13:08:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:59.160 13:08:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:59.160 13:08:09 
event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:59.160 13:08:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:59.160 13:08:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:59.160 13:08:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:59.160 13:08:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:59.418 13:08:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:59.678 [2024-07-25 13:08:10.124789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:59.937 [2024-07-25 13:08:10.210082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.937 [2024-07-25 13:08:10.210086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.937 [2024-07-25 13:08:10.255519] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:59.937 [2024-07-25 13:08:10.255564] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:02.473 13:08:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:02.473 13:08:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:02.473 spdk_app_start Round 2 00:08:02.473 13:08:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 784924 /var/tmp/spdk-nbd.sock 00:08:02.473 13:08:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 784924 ']' 00:08:02.473 13:08:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:02.473 13:08:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.473 13:08:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:02.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:02.473 13:08:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.473 13:08:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:02.732 13:08:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.732 13:08:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:02.732 13:08:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:02.995 Malloc0 00:08:02.995 13:08:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:03.262 Malloc1 00:08:03.262 13:08:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:03.262 13:08:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.262 13:08:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:03.262 13:08:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:03.262 13:08:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.262 13:08:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:03.262 13:08:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:03.262 13:08:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.262 13:08:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:03.262 13:08:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:03.262 13:08:13 event.app_repeat -- bdev/nbd_common.sh@11 -- 
# nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.262 13:08:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:03.262 13:08:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:03.262 13:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:03.262 13:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:03.262 13:08:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:03.521 /dev/nbd0 00:08:03.521 13:08:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:03.521 13:08:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:03.521 13:08:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:03.521 13:08:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:03.521 13:08:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:03.521 13:08:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:03.521 13:08:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:03.521 13:08:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:03.521 13:08:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:03.521 13:08:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:03.521 13:08:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:03.521 1+0 records in 00:08:03.521 1+0 records out 00:08:03.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024575 s, 16.7 MB/s 00:08:03.521 13:08:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:03.521 13:08:13 event.app_repeat 
-- common/autotest_common.sh@886 -- # size=4096 00:08:03.521 13:08:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:03.521 13:08:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:03.521 13:08:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:03.521 13:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.521 13:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:03.521 13:08:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:03.781 /dev/nbd1 00:08:03.781 13:08:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:03.781 13:08:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:03.781 13:08:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:03.781 13:08:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:03.781 13:08:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:03.781 13:08:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:03.781 13:08:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:03.781 13:08:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:03.781 13:08:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:03.781 13:08:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:03.781 13:08:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:03.781 1+0 records in 00:08:03.781 1+0 records out 00:08:03.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282916 s, 14.5 MB/s 00:08:03.781 
13:08:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:03.781 13:08:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:03.781 13:08:14 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:03.781 13:08:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:03.781 13:08:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:03.781 13:08:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.781 13:08:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:03.781 13:08:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.781 13:08:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.781 13:08:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:04.041 { 00:08:04.041 "nbd_device": "/dev/nbd0", 00:08:04.041 "bdev_name": "Malloc0" 00:08:04.041 }, 00:08:04.041 { 00:08:04.041 "nbd_device": "/dev/nbd1", 00:08:04.041 "bdev_name": "Malloc1" 00:08:04.041 } 00:08:04.041 ]' 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:04.041 { 00:08:04.041 "nbd_device": "/dev/nbd0", 00:08:04.041 "bdev_name": "Malloc0" 00:08:04.041 }, 00:08:04.041 { 00:08:04.041 "nbd_device": "/dev/nbd1", 00:08:04.041 "bdev_name": "Malloc1" 00:08:04.041 } 00:08:04.041 ]' 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:04.041 /dev/nbd1' 00:08:04.041 13:08:14 event.app_repeat -- 
bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:04.041 /dev/nbd1' 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:04.041 256+0 records in 00:08:04.041 256+0 records out 00:08:04.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103824 s, 101 MB/s 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:04.041 256+0 records in 00:08:04.041 256+0 records out 00:08:04.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171248 s, 61.2 MB/s 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.041 13:08:14 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:04.041 256+0 records in 00:08:04.041 256+0 records out 00:08:04.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185997 s, 56.4 MB/s 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:04.041 13:08:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.147 13:08:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:04.405 13:08:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:04.405 13:08:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:04.405 13:08:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:04.405 13:08:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.405 13:08:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.405 13:08:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:04.405 13:08:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:04.405 13:08:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.405 13:08:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.405 13:08:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:04.663 13:08:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:04.663 13:08:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:04.663 13:08:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:04.663 13:08:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.663 13:08:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.663 13:08:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:04.663 13:08:14 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:04.663 13:08:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.663 13:08:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:04.663 13:08:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.663 13:08:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:04.922 13:08:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:04.922 13:08:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:04.922 13:08:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:04.922 13:08:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:04.922 13:08:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:04.922 13:08:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:04.922 13:08:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:04.922 13:08:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:04.922 13:08:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:04.922 13:08:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:04.922 13:08:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:04.922 13:08:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:04.922 13:08:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:05.183 13:08:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:05.442 [2024-07-25 13:08:15.758416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:05.442 [2024-07-25 13:08:15.838133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.442 [2024-07-25 
13:08:15.838146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.442 [2024-07-25 13:08:15.881717] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:05.442 [2024-07-25 13:08:15.881763] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:08.731 13:08:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 784924 /var/tmp/spdk-nbd.sock 00:08:08.731 13:08:18 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 784924 ']' 00:08:08.731 13:08:18 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:08.731 13:08:18 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.731 13:08:18 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:08.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:08.731 13:08:18 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.731 13:08:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:08.731 13:08:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.732 13:08:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:08.732 13:08:18 event.app_repeat -- event/event.sh@39 -- # killprocess 784924 00:08:08.732 13:08:18 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 784924 ']' 00:08:08.732 13:08:18 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 784924 00:08:08.732 13:08:18 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:08:08.732 13:08:18 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:08.732 13:08:18 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 784924 00:08:08.732 13:08:18 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:08.732 13:08:18 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:08.732 13:08:18 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 784924' 00:08:08.732 killing process with pid 784924 00:08:08.732 13:08:18 event.app_repeat -- common/autotest_common.sh@969 -- # kill 784924 00:08:08.732 13:08:18 event.app_repeat -- common/autotest_common.sh@974 -- # wait 784924 00:08:08.732 spdk_app_start is called in Round 0. 00:08:08.732 Shutdown signal received, stop current app iteration 00:08:08.732 Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 reinitialization... 00:08:08.732 spdk_app_start is called in Round 1. 00:08:08.732 Shutdown signal received, stop current app iteration 00:08:08.732 Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 reinitialization... 00:08:08.732 spdk_app_start is called in Round 2. 
00:08:08.732 Shutdown signal received, stop current app iteration 00:08:08.732 Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 reinitialization... 00:08:08.732 spdk_app_start is called in Round 3. 00:08:08.732 Shutdown signal received, stop current app iteration 00:08:08.732 13:08:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:08.732 13:08:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:08.732 00:08:08.732 real 0m17.844s 00:08:08.732 user 0m38.431s 00:08:08.732 sys 0m3.537s 00:08:08.732 13:08:18 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.732 13:08:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:08.732 ************************************ 00:08:08.732 END TEST app_repeat 00:08:08.732 ************************************ 00:08:08.732 13:08:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:08.732 00:08:08.732 real 0m26.795s 00:08:08.732 user 0m53.713s 00:08:08.732 sys 0m4.801s 00:08:08.732 13:08:19 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.732 13:08:19 event -- common/autotest_common.sh@10 -- # set +x 00:08:08.732 ************************************ 00:08:08.732 END TEST event 00:08:08.732 ************************************ 00:08:08.732 13:08:19 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:08:08.732 13:08:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.732 13:08:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.732 13:08:19 -- common/autotest_common.sh@10 -- # set +x 00:08:08.732 ************************************ 00:08:08.732 START TEST thread 00:08:08.732 ************************************ 00:08:08.732 13:08:19 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:08:08.732 * Looking for test storage... 
00:08:08.732 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:08:08.732 13:08:19 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:08.732 13:08:19 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:08.732 13:08:19 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.732 13:08:19 thread -- common/autotest_common.sh@10 -- # set +x 00:08:09.028 ************************************ 00:08:09.028 START TEST thread_poller_perf 00:08:09.028 ************************************ 00:08:09.028 13:08:19 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:09.028 [2024-07-25 13:08:19.272054] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:09.028 [2024-07-25 13:08:19.272110] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788271 ] 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:09.028 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.028 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:09.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.029 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:09.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.029 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:09.029 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.029 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:09.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.029 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:09.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.029 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:09.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.029 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:09.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.029 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:09.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.029 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:09.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.029 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:09.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.029 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:09.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.029 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:09.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.029 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:09.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.029 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:09.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.029 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:09.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:09.029 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:09.029 [2024-07-25 13:08:19.403011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.029 [2024-07-25 13:08:19.486470] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:08:09.029 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:10.408 ====================================== 00:08:10.408 busy:2514950716 (cyc) 00:08:10.408 total_run_count: 290000 00:08:10.408 tsc_hz: 2500000000 (cyc) 00:08:10.408 ====================================== 00:08:10.408 poller_cost: 8672 (cyc), 3468 (nsec) 00:08:10.408 00:08:10.408 real 0m1.333s 00:08:10.408 user 0m1.186s 00:08:10.408 sys 0m0.141s 00:08:10.408 13:08:20 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.408 13:08:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:10.408 ************************************ 00:08:10.408 END TEST thread_poller_perf 00:08:10.408 ************************************ 00:08:10.408 13:08:20 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:10.408 13:08:20 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:10.408 13:08:20 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.408 13:08:20 thread -- common/autotest_common.sh@10 -- # set +x 00:08:10.408 ************************************ 00:08:10.408 START TEST thread_poller_perf 00:08:10.408 ************************************ 00:08:10.408 13:08:20 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:10.408 [2024-07-25 13:08:20.689237] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:08:10.408 [2024-07-25 13:08:20.689299] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788553 ] 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:10.408 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:10.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:10.408 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:10.408 [2024-07-25 13:08:20.822505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.667 [2024-07-25 13:08:20.900813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.667 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:08:11.604 ====================================== 00:08:11.604 busy:2502418716 (cyc) 00:08:11.604 total_run_count: 3795000 00:08:11.604 tsc_hz: 2500000000 (cyc) 00:08:11.604 ====================================== 00:08:11.604 poller_cost: 659 (cyc), 263 (nsec) 00:08:11.604 00:08:11.604 real 0m1.319s 00:08:11.604 user 0m1.176s 00:08:11.604 sys 0m0.136s 00:08:11.604 13:08:21 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.604 13:08:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:11.604 ************************************ 00:08:11.604 END TEST thread_poller_perf 00:08:11.604 ************************************ 00:08:11.604 13:08:22 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:11.604 00:08:11.604 real 0m2.913s 00:08:11.604 user 0m2.461s 00:08:11.604 sys 0m0.462s 00:08:11.604 13:08:22 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.604 13:08:22 thread -- common/autotest_common.sh@10 -- # set +x 00:08:11.604 ************************************ 00:08:11.604 END TEST thread 00:08:11.604 ************************************ 00:08:11.604 13:08:22 -- spdk/autotest.sh@184 -- # [[ 1 -eq 1 ]] 00:08:11.604 13:08:22 -- spdk/autotest.sh@185 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:08:11.604 13:08:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:11.604 13:08:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.604 13:08:22 -- common/autotest_common.sh@10 -- # set +x 00:08:11.862 ************************************ 00:08:11.862 START TEST accel 00:08:11.862 ************************************ 00:08:11.862 13:08:22 accel -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:08:11.862 * Looking for test storage... 
00:08:11.862 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:08:11.862 13:08:22 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:11.862 13:08:22 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:11.862 13:08:22 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:11.862 13:08:22 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=788875 00:08:11.862 13:08:22 accel -- accel/accel.sh@63 -- # waitforlisten 788875 00:08:11.862 13:08:22 accel -- common/autotest_common.sh@831 -- # '[' -z 788875 ']' 00:08:11.862 13:08:22 accel -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.862 13:08:22 accel -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.862 13:08:22 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:11.862 13:08:22 accel -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.862 13:08:22 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:11.862 13:08:22 accel -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.862 13:08:22 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.862 13:08:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.862 13:08:22 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.862 13:08:22 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.862 13:08:22 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.862 13:08:22 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.862 13:08:22 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:11.862 13:08:22 accel -- accel/accel.sh@41 -- # jq -r . 00:08:11.862 [2024-07-25 13:08:22.282337] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:08:11.862 [2024-07-25 13:08:22.282403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788875 ]
[qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used, repeated for devices 0000:3d:01.0 through 0000:3f:02.7 as during the previous test startup]
00:08:12.121 [2024-07-25 13:08:22.415878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.121 [2024-07-25 13:08:22.502069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.691 13:08:23 accel -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.691 13:08:23 accel -- common/autotest_common.sh@864 -- # return 0 00:08:12.691 13:08:23 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:12.691 13:08:23 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:12.691 13:08:23 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:12.691 13:08:23 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:12.691 13:08:23 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:12.691 13:08:23 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:12.691 13:08:23 accel -- accel/accel.sh@70 -- # jq -r '.
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:12.691 13:08:23 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.691 13:08:23 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.950 13:08:23 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.950 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:12.950 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.950 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:12.950 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.950 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:12.950 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.950 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:12.950 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.950 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:12.950 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.950 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # 
read -r opc module 00:08:12.950 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.950 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:12.950 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.950 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:12.950 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.950 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:12.950 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.950 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:12.950 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.950 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:12.950 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.950 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:12.950 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.950 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:12.950 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.950 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.950 13:08:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:12.951 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.951 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.951 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.951 13:08:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:12.951 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.951 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.951 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.951 13:08:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:12.951 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.951 13:08:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:12.951 13:08:23 accel -- accel/accel.sh@72 -- # IFS== 00:08:12.951 13:08:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:12.951 13:08:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:12.951 13:08:23 accel -- accel/accel.sh@75 -- # killprocess 788875 00:08:12.951 13:08:23 accel -- common/autotest_common.sh@950 -- # '[' -z 788875 ']' 00:08:12.951 13:08:23 accel -- common/autotest_common.sh@954 -- # kill -0 788875 00:08:12.951 13:08:23 accel -- common/autotest_common.sh@955 -- # uname 00:08:12.951 13:08:23 accel -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.951 13:08:23 accel -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 788875 00:08:12.951 13:08:23 accel -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.951 13:08:23 accel -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.951 13:08:23 accel -- common/autotest_common.sh@968 -- # echo 'killing process with pid 788875' 00:08:12.951 killing process with pid 788875 00:08:12.951 13:08:23 accel -- common/autotest_common.sh@969 -- # kill 788875 00:08:12.951 13:08:23 accel -- common/autotest_common.sh@974 -- # wait 788875 00:08:13.210 13:08:23 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:13.210 13:08:23 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:13.210 13:08:23 accel -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:13.210 13:08:23 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.210 13:08:23 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.210 13:08:23 accel.accel_help -- common/autotest_common.sh@1125 -- # accel_perf -h 00:08:13.210 13:08:23 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:13.210 13:08:23 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:13.210 13:08:23 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.210 13:08:23 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.210 13:08:23 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.210 13:08:23 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.210 13:08:23 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.210 13:08:23 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:13.210 13:08:23 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
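The jq filter traced in accel.sh above, `. | to_entries | map("\(.key)=\(.value)") | .[]`, flattens the JSON object returned by accel_get_opc_assignments into `opc=module` lines that the shell loop then splits with `IFS==`. A Python sketch of the same transformation; the opcode map here is a hypothetical example, since the real names come from the spdk_tgt RPC:

```python
import json

# Hypothetical accel_get_opc_assignments output; the real opcode
# names and modules come from the spdk_tgt RPC, not this example.
rpc_output = json.loads('{"copy": "software", "crc32c": "software"}')

# Equivalent of: jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
lines = [f"{key}={value}" for key, value in rpc_output.items()]

for line in lines:
    opc, module = line.split("=")   # mirrors: IFS== read -r opc module
    print(opc, module)
```

Each resulting pair is what the loop stores in `expected_opcs["$opc"]`.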
00:08:13.210 13:08:23 accel.accel_help -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.210 13:08:23 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:13.469 13:08:23 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:13.469 13:08:23 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:13.469 13:08:23 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.469 13:08:23 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.469 ************************************ 00:08:13.469 START TEST accel_missing_filename 00:08:13.469 ************************************ 00:08:13.469 13:08:23 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w compress 00:08:13.469 13:08:23 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # local es=0 00:08:13.470 13:08:23 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:13.470 13:08:23 accel.accel_missing_filename -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:08:13.470 13:08:23 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.470 13:08:23 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # type -t accel_perf 00:08:13.470 13:08:23 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.470 13:08:23 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:08:13.470 13:08:23 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:13.470 13:08:23 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:13.470 13:08:23 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.470 13:08:23 
accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.470 13:08:23 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.470 13:08:23 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.470 13:08:23 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.470 13:08:23 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:13.470 13:08:23 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:13.470 [2024-07-25 13:08:23.803609] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:13.470 [2024-07-25 13:08:23.803671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789177 ]
[qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used, repeated for devices 0000:3d:01.0 through 0000:3f:02.7 as during the previous test startups]
00:08:13.470 [2024-07-25 13:08:23.935523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.729 [2024-07-25 13:08:24.016944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.729 [2024-07-25 13:08:24.083758] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:13.729 [2024-07-25 13:08:24.149440] accel_perf.c:1540:main: *ERROR*: ERROR starting application 00:08:13.988 A filename is required.
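The es values traced right after this expected failure (es=234, then 106, then 1) suggest how the NOT helper in autotest_common.sh normalizes exit statuses: a status above 128 is folded back by subtracting 128, any remaining nonzero status collapses to 1, and NOT then succeeds exactly when the wrapped command failed. This is a hedged sketch of that reading, inferred only from the trace; the actual case table in autotest_common.sh may differ:

```python
def not_helper_status(es: int) -> bool:
    # Fold out-of-range statuses (>128) back into the plain exit-status
    # range, matching the es=234 -> es=106 step seen in the trace.
    if es > 128:
        es -= 128
    # Any remaining nonzero status is treated simply as "failed"
    # (the trace maps es=106 down to es=1).
    es = 1 if es != 0 else 0
    # NOT succeeds exactly when the wrapped command failed.
    return es != 0

print(not_helper_status(234))
```

Under this reading, accel_missing_filename passes because accel_perf exits nonzero when no filename is given.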
00:08:13.988 13:08:24 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # es=234 00:08:13.988 13:08:24 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:13.988 13:08:24 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # es=106 00:08:13.988 13:08:24 accel.accel_missing_filename -- common/autotest_common.sh@663 -- # case "$es" in 00:08:13.988 13:08:24 accel.accel_missing_filename -- common/autotest_common.sh@670 -- # es=1 00:08:13.988 13:08:24 accel.accel_missing_filename -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:13.988 00:08:13.988 real 0m0.463s 00:08:13.988 user 0m0.296s 00:08:13.988 sys 0m0.191s 00:08:13.988 13:08:24 accel.accel_missing_filename -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.988 13:08:24 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:13.988 ************************************ 00:08:13.988 END TEST accel_missing_filename 00:08:13.988 ************************************ 00:08:13.988 13:08:24 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:13.988 13:08:24 accel -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:13.989 13:08:24 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.989 13:08:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.989 ************************************ 00:08:13.989 START TEST accel_compress_verify 00:08:13.989 ************************************ 00:08:13.989 13:08:24 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:13.989 13:08:24 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # local es=0 00:08:13.989 13:08:24 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # valid_exec_arg 
accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:13.989 13:08:24 accel.accel_compress_verify -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:08:13.989 13:08:24 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.989 13:08:24 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # type -t accel_perf 00:08:13.989 13:08:24 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.989 13:08:24 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:13.989 13:08:24 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:13.989 13:08:24 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:13.989 13:08:24 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.989 13:08:24 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.989 13:08:24 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.989 13:08:24 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.989 13:08:24 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.989 13:08:24 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:13.989 13:08:24 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:13.989 [2024-07-25 13:08:24.338353] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:08:13.989 [2024-07-25 13:08:24.338410] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789252 ] 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:13.989 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:13.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.989 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:13.989 [2024-07-25 13:08:24.469276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.248 [2024-07-25 13:08:24.556261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.248 [2024-07-25 13:08:24.617998] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.248 [2024-07-25 13:08:24.680563] accel_perf.c:1540:main: *ERROR*: ERROR starting application 00:08:14.508 00:08:14.508 Compression does not support the verify option, aborting. 
00:08:14.508 13:08:24 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # es=161 00:08:14.508 13:08:24 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.508 13:08:24 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # es=33 00:08:14.508 13:08:24 accel.accel_compress_verify -- common/autotest_common.sh@663 -- # case "$es" in 00:08:14.508 13:08:24 accel.accel_compress_verify -- common/autotest_common.sh@670 -- # es=1 00:08:14.508 13:08:24 accel.accel_compress_verify -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.508 00:08:14.508 real 0m0.461s 00:08:14.508 user 0m0.308s 00:08:14.508 sys 0m0.178s 00:08:14.508 13:08:24 accel.accel_compress_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.508 13:08:24 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:14.508 ************************************ 00:08:14.508 END TEST accel_compress_verify 00:08:14.508 ************************************ 00:08:14.508 13:08:24 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:14.508 13:08:24 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:14.508 13:08:24 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.508 13:08:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:14.508 ************************************ 00:08:14.508 START TEST accel_wrong_workload 00:08:14.508 ************************************ 00:08:14.508 13:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w foobar 00:08:14.508 13:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # local es=0 00:08:14.508 13:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:14.508 13:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:08:14.508 13:08:24 
accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.508 13:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # type -t accel_perf 00:08:14.508 13:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.508 13:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:08:14.508 13:08:24 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:14.508 13:08:24 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:14.508 13:08:24 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.508 13:08:24 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.508 13:08:24 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.508 13:08:24 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.508 13:08:24 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.508 13:08:24 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:14.508 13:08:24 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:14.508 Unsupported workload type: foobar 00:08:14.509 [2024-07-25 13:08:24.874986] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:14.509 accel_perf options: 00:08:14.509 [-h help message] 00:08:14.509 [-q queue depth per core] 00:08:14.509 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:14.509 [-T number of threads per core 00:08:14.509 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:08:14.509 [-t time in seconds] 00:08:14.509 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:14.509 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:14.509 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:14.509 [-l for compress/decompress workloads, name of uncompressed input file 00:08:14.509 [-S for crc32c workload, use this seed value (default 0) 00:08:14.509 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:14.509 [-f for fill workload, use this BYTE value (default 255) 00:08:14.509 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:14.509 [-y verify result if this switch is on] 00:08:14.509 [-a tasks to allocate per core (default: same value as -q)] 00:08:14.509 Can be used to spread operations across a wider range of memory. 
00:08:14.509 13:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # es=1 00:08:14.509 13:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.509 13:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:14.509 13:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.509 00:08:14.509 real 0m0.043s 00:08:14.509 user 0m0.027s 00:08:14.509 sys 0m0.016s 00:08:14.509 13:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.509 13:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:14.509 ************************************ 00:08:14.509 END TEST accel_wrong_workload 00:08:14.509 ************************************ 00:08:14.509 Error: writing output failed: Broken pipe 00:08:14.509 13:08:24 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:14.509 13:08:24 accel -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:14.509 13:08:24 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.509 13:08:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:14.509 ************************************ 00:08:14.509 START TEST accel_negative_buffers 00:08:14.509 ************************************ 00:08:14.509 13:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:14.509 13:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # local es=0 00:08:14.509 13:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:14.509 13:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:08:14.509 13:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.509 13:08:24 
accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # type -t accel_perf 00:08:14.509 13:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.509 13:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:08:14.509 13:08:24 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:14.509 13:08:24 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:14.509 13:08:24 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.509 13:08:24 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.509 13:08:24 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.509 13:08:24 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.509 13:08:24 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.509 13:08:24 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:14.509 13:08:24 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:14.769 -x option must be non-negative. 00:08:14.769 [2024-07-25 13:08:24.996829] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:14.769 accel_perf options: 00:08:14.769 [-h help message] 00:08:14.769 [-q queue depth per core] 00:08:14.769 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:14.769 [-T number of threads per core 00:08:14.769 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:08:14.769 [-t time in seconds] 00:08:14.769 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:14.769 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:14.769 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:14.769 [-l for compress/decompress workloads, name of uncompressed input file 00:08:14.769 [-S for crc32c workload, use this seed value (default 0) 00:08:14.769 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:14.769 [-f for fill workload, use this BYTE value (default 255) 00:08:14.769 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:14.769 [-y verify result if this switch is on] 00:08:14.769 [-a tasks to allocate per core (default: same value as -q)] 00:08:14.769 Can be used to spread operations across a wider range of memory. 
00:08:14.769 13:08:25 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # es=1 00:08:14.769 13:08:25 accel.accel_negative_buffers -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.769 13:08:25 accel.accel_negative_buffers -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:14.769 13:08:25 accel.accel_negative_buffers -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.769 00:08:14.769 real 0m0.042s 00:08:14.769 user 0m0.023s 00:08:14.769 sys 0m0.019s 00:08:14.769 13:08:25 accel.accel_negative_buffers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.769 13:08:25 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:14.769 ************************************ 00:08:14.769 END TEST accel_negative_buffers 00:08:14.769 ************************************ 00:08:14.769 Error: writing output failed: Broken pipe 00:08:14.769 13:08:25 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:14.769 13:08:25 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:14.769 13:08:25 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.769 13:08:25 accel -- common/autotest_common.sh@10 -- # set +x 00:08:14.769 ************************************ 00:08:14.769 START TEST accel_crc32c 00:08:14.769 ************************************ 00:08:14.770 13:08:25 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:14.770 13:08:25 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:14.770 13:08:25 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:14.770 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:14.770 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:14.770 13:08:25 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:14.770 13:08:25 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:14.770 13:08:25 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:14.770 13:08:25 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.770 13:08:25 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.770 13:08:25 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.770 13:08:25 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.770 13:08:25 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.770 13:08:25 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:14.770 13:08:25 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:14.770 [2024-07-25 13:08:25.107322] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:14.770 [2024-07-25 13:08:25.107382] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789518 ] 00:08:14.770 [2024-07-25 13:08:25.238532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.029 [2024-07-25 13:08:25.321971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.029 
13:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:15.029 13:08:25 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # 
case "$var" in 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.029 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.030 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:15.030 13:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.030 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.030 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.030 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.030 13:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.030 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.030 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.030 13:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.030 13:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.030 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.030 13:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.407 13:08:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.408 13:08:26 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:16.408 13:08:26 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.408 00:08:16.408 real 0m1.455s 00:08:16.408 user 0m0.011s 00:08:16.408 sys 0m0.000s 00:08:16.408 13:08:26 accel.accel_crc32c -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.408 13:08:26 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:16.408 ************************************ 00:08:16.408 END TEST accel_crc32c 00:08:16.408 ************************************ 00:08:16.408 
13:08:26 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:16.408 13:08:26 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:16.408 13:08:26 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.408 13:08:26 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.408 ************************************ 00:08:16.408 START TEST accel_crc32c_C2 00:08:16.408 ************************************ 00:08:16.408 13:08:26 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:16.408 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:16.408 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:16.408 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.408 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:16.408 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.408 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:16.408 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:16.408 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.408 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.408 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.408 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.408 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.408 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:16.408 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
00:08:16.408 [2024-07-25 13:08:26.639364] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:16.408 [2024-07-25 13:08:26.639423] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789802 ] 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:16.408 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:16.408 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:16.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.408 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:16.408 [2024-07-25 13:08:26.774683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.408 [2024-07-25 13:08:26.853194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 
00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # 
val=Yes 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.668 13:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.607 00:08:17.607 real 0m1.458s 00:08:17.607 user 0m0.009s 00:08:17.607 sys 0m0.003s 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.607 13:08:28 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:17.607 ************************************ 00:08:17.607 END TEST accel_crc32c_C2 00:08:17.607 ************************************ 00:08:17.867 13:08:28 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:17.867 13:08:28 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:17.867 13:08:28 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.867 13:08:28 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.867 ************************************ 00:08:17.867 START TEST accel_copy 00:08:17.867 
************************************ 00:08:17.867 13:08:28 accel.accel_copy -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy -y 00:08:17.867 13:08:28 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:17.867 13:08:28 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:17.867 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:17.867 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:17.867 13:08:28 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:17.867 13:08:28 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:17.867 13:08:28 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:17.867 13:08:28 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.867 13:08:28 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.867 13:08:28 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.867 13:08:28 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.867 13:08:28 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.867 13:08:28 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:17.867 13:08:28 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:17.867 [2024-07-25 13:08:28.174112] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:08:17.867 [2024-07-25 13:08:28.174176] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790081 ] 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:17.867 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:17.867 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.867 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:17.867 [2024-07-25 13:08:28.303376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.127 [2024-07-25 13:08:28.385866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.127 13:08:28 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.127 13:08:28 accel.accel_copy 
-- accel/accel.sh@20 -- # val=32 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.127 13:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.128 13:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.128 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.128 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.128 13:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.128 13:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.128 13:08:28 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:18.128 13:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:19.507 13:08:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:19.508 13:08:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:19.508 13:08:29 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:08:19.508 13:08:29 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:19.508 13:08:29 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.508 00:08:19.508 real 0m1.451s 00:08:19.508 user 0m0.008s 00:08:19.508 sys 0m0.003s 00:08:19.508 13:08:29 accel.accel_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.508 13:08:29 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:19.508 ************************************ 00:08:19.508 END TEST accel_copy 00:08:19.508 ************************************ 00:08:19.508 13:08:29 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:19.508 13:08:29 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:19.508 13:08:29 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.508 13:08:29 accel -- common/autotest_common.sh@10 -- # set +x 00:08:19.508 ************************************ 00:08:19.508 START TEST accel_fill 00:08:19.508 ************************************ 00:08:19.508 13:08:29 accel.accel_fill -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:19.508 [2024-07-25 13:08:29.685285] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:19.508 [2024-07-25 13:08:29.685345] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790367 ] 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3d:01.7 
cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3f:01.5 cannot be used 
00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:19.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:19.508 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:19.508 [2024-07-25 13:08:29.816386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.508 [2024-07-25 13:08:29.898872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 
00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.508 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:19.509 13:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:20.889 13:08:31 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:20.889 13:08:31 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.889 00:08:20.889 real 0m1.458s 00:08:20.889 user 0m0.009s 00:08:20.889 sys 0m0.002s 00:08:20.889 13:08:31 accel.accel_fill -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.889 13:08:31 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:20.889 ************************************ 00:08:20.889 END TEST accel_fill 00:08:20.889 ************************************ 00:08:20.889 13:08:31 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:20.889 13:08:31 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:20.889 13:08:31 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.889 13:08:31 accel -- common/autotest_common.sh@10 -- # set +x 00:08:20.889 ************************************ 00:08:20.889 START TEST accel_copy_crc32c 00:08:20.889 ************************************ 00:08:20.889 13:08:31 
accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy_crc32c -y 00:08:20.889 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:20.889 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:20.889 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:20.889 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:20.889 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:20.889 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:20.889 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:20.889 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:20.889 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:20.889 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.889 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.889 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:20.889 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:20.889 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:20.889 [2024-07-25 13:08:31.213934] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:08:20.889 [2024-07-25 13:08:31.213989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790646 ] 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:20.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.889 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:20.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.890 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:20.890 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.890 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:20.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.890 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:20.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.890 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:20.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.890 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:20.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.890 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:20.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.890 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:20.890 [2024-07-25 13:08:31.345391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.149 [2024-07-25 13:08:31.428076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 
13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 
00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:21.149 13:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:22.529 
13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.529 00:08:22.529 real 0m1.453s 00:08:22.529 user 0m0.007s 00:08:22.529 sys 0m0.004s 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.529 13:08:32 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:22.529 ************************************ 00:08:22.529 END TEST accel_copy_crc32c 00:08:22.529 ************************************ 00:08:22.529 13:08:32 accel -- accel/accel.sh@106 -- # 
run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:22.529 13:08:32 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:22.529 13:08:32 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.529 13:08:32 accel -- common/autotest_common.sh@10 -- # set +x 00:08:22.529 ************************************ 00:08:22.529 START TEST accel_copy_crc32c_C2 00:08:22.529 ************************************ 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
00:08:22.529 [2024-07-25 13:08:32.726323] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:22.529 [2024-07-25 13:08:32.726378] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790932 ] 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:22.529 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:22.529 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.529 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:22.529 [2024-07-25 13:08:32.855814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.529 [2024-07-25 13:08:32.938012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.529 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.530 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.530 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:22.530 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.530 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.530 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.530 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:22.530 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.530 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:22.530 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.530 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.530 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:22.530 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.530 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.530 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.530 13:08:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.530 13:08:33 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:22.530 
13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:22.530 13:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:23.939 13:08:34 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.939 
00:08:23.939 real 0m1.440s 00:08:23.939 user 0m0.008s 00:08:23.939 sys 0m0.004s 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.939 13:08:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:23.939 ************************************ 00:08:23.939 END TEST accel_copy_crc32c_C2 00:08:23.939 ************************************ 00:08:23.939 13:08:34 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:23.939 13:08:34 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:23.939 13:08:34 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.939 13:08:34 accel -- common/autotest_common.sh@10 -- # set +x 00:08:23.939 ************************************ 00:08:23.939 START TEST accel_dualcast 00:08:23.939 ************************************ 00:08:23.939 13:08:34 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dualcast -y 00:08:23.939 13:08:34 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:23.939 13:08:34 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:23.939 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:23.939 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:23.939 13:08:34 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:23.939 13:08:34 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:23.939 13:08:34 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:23.939 13:08:34 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.939 13:08:34 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:23.939 13:08:34 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.939 13:08:34 
accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.939 13:08:34 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.939 13:08:34 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:23.939 13:08:34 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:23.939 [2024-07-25 13:08:34.252036] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:23.939 [2024-07-25 13:08:34.252099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791215 ] 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3d:02.0 cannot be used 
00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:23.939 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:23.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:23.939 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:23.939 [2024-07-25 13:08:34.383528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.198 [2024-07-25 13:08:34.467189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.198 13:08:34 accel.accel_dualcast -- 
accel/accel.sh@19 -- # IFS=: 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.198 13:08:34 
accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case 
"$var" in 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:24.198 13:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 
00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:25.576 13:08:35 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.576 00:08:25.576 real 0m1.461s 00:08:25.576 user 0m0.009s 00:08:25.576 sys 0m0.002s 00:08:25.576 13:08:35 accel.accel_dualcast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.576 13:08:35 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:25.576 ************************************ 00:08:25.576 END TEST accel_dualcast 00:08:25.576 ************************************ 00:08:25.576 13:08:35 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:25.576 13:08:35 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:25.576 13:08:35 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.576 13:08:35 accel -- common/autotest_common.sh@10 -- # set +x 00:08:25.576 ************************************ 00:08:25.576 START TEST accel_compare 00:08:25.576 ************************************ 00:08:25.576 13:08:35 accel.accel_compare -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w 
compare -y 00:08:25.576 13:08:35 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:25.576 13:08:35 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:25.576 13:08:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.576 13:08:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.576 13:08:35 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:25.576 13:08:35 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:25.576 13:08:35 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:25.576 13:08:35 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:25.576 13:08:35 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:25.576 13:08:35 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.576 13:08:35 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.576 13:08:35 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:25.576 13:08:35 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:25.576 13:08:35 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:25.576 [2024-07-25 13:08:35.795970] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:08:25.576 [2024-07-25 13:08:35.796091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791501 ] 00:08:25.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.576 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:25.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.576 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:25.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.576 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:25.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.576 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:25.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.576 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:25.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.576 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:25.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.576 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:25.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.576 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:25.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.576 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:25.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.576 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:25.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.576 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:25.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.576 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:25.577 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:25.577 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:25.577 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:25.577 [2024-07-25 13:08:36.000381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.836 [2024-07-25 13:08:36.082854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # 
IFS=: 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.836 13:08:36 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.837 13:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:27.215 13:08:37 accel.accel_compare 
-- accel/accel.sh@21 -- # case "$var" in 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:27.215 13:08:37 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:27.215 00:08:27.215 real 0m1.541s 00:08:27.215 user 0m0.011s 00:08:27.215 sys 0m0.000s 00:08:27.215 13:08:37 accel.accel_compare -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.215 13:08:37 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:27.215 ************************************ 00:08:27.215 END TEST accel_compare 00:08:27.215 ************************************ 00:08:27.216 13:08:37 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:27.216 13:08:37 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:27.216 13:08:37 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.216 13:08:37 accel -- common/autotest_common.sh@10 -- # set +x 00:08:27.216 ************************************ 00:08:27.216 START TEST accel_xor 00:08:27.216 ************************************ 00:08:27.216 13:08:37 accel.accel_xor -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w xor -y 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf 
-c /dev/fd/62 -t 1 -w xor -y 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:27.216 [2024-07-25 13:08:37.381043] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:27.216 [2024-07-25 13:08:37.381082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791783 ] 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:08:27.216 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: 
Requested device 0000:3f:01.4 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:27.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.216 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:27.216 [2024-07-25 13:08:37.498333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.216 [2024-07-25 13:08:37.580887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.216 13:08:37 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.216 
13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:27.216 13:08:37 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:27.216 13:08:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.217 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.217 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.217 13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.217 13:08:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.217 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.217 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.217 13:08:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.217 13:08:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.217 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.217 13:08:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.596 00:08:28.596 real 0m1.422s 00:08:28.596 user 0m0.010s 00:08:28.596 sys 0m0.001s 00:08:28.596 13:08:38 accel.accel_xor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.596 13:08:38 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:28.596 ************************************ 00:08:28.596 END TEST accel_xor 00:08:28.596 ************************************ 00:08:28.596 13:08:38 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:28.596 13:08:38 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:28.596 13:08:38 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.596 13:08:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:28.596 ************************************ 00:08:28.596 START TEST accel_xor 00:08:28.596 
************************************ 00:08:28.596 13:08:38 accel.accel_xor -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w xor -y -x 3 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:28.596 13:08:38 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:28.596 [2024-07-25 13:08:38.899026] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:08:28.596 [2024-07-25 13:08:38.899081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792067 ] 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:28.596 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.596 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:28.596 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.597 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:28.597 [2024-07-25 13:08:39.027869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.856 [2024-07-25 13:08:39.111028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 
-- # val= 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 
13:08:39 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.856 13:08:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.235 13:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.235 13:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.235 13:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.235 13:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.235 13:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.235 13:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.235 13:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.236 13:08:40 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:30.236 13:08:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.236 00:08:30.236 real 0m1.446s 00:08:30.236 user 0m0.010s 00:08:30.236 sys 0m0.002s 00:08:30.236 13:08:40 accel.accel_xor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.236 13:08:40 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:30.236 ************************************ 00:08:30.236 END TEST accel_xor 00:08:30.236 ************************************ 00:08:30.236 13:08:40 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:30.236 13:08:40 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:30.236 13:08:40 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.236 13:08:40 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.236 ************************************ 00:08:30.236 START TEST accel_dif_verify 00:08:30.236 ************************************ 00:08:30.236 13:08:40 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_verify 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
dif_verify 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:30.236 [2024-07-25 13:08:40.400854] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:30.236 [2024-07-25 13:08:40.400894] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792350 ] 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:30.236 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:08:30.236 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:30.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.236 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:30.236 [2024-07-25 13:08:40.516122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.236 [2024-07-25 13:08:40.598580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 
00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:30.236 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:30.237 
13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 
00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.237 13:08:40 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.237 13:08:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@20 
-- # val= 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:31.612 13:08:41 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.612 00:08:31.612 real 0m1.426s 00:08:31.612 user 0m0.011s 00:08:31.612 sys 0m0.002s 00:08:31.612 13:08:41 accel.accel_dif_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.612 13:08:41 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:31.612 ************************************ 00:08:31.612 END TEST accel_dif_verify 00:08:31.612 ************************************ 00:08:31.612 13:08:41 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:31.612 13:08:41 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:31.612 13:08:41 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.612 13:08:41 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.612 ************************************ 00:08:31.612 START TEST accel_dif_generate 00:08:31.612 ************************************ 00:08:31.612 13:08:41 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate 00:08:31.612 13:08:41 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:31.612 13:08:41 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:31.612 13:08:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.612 13:08:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.612 13:08:41 accel.accel_dif_generate -- accel/accel.sh@15 -- 
# accel_perf -t 1 -w dif_generate 00:08:31.612 13:08:41 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:31.612 13:08:41 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:31.612 13:08:41 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.612 13:08:41 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.612 13:08:41 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.612 13:08:41 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.612 13:08:41 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.612 13:08:41 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:31.612 13:08:41 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:31.612 [2024-07-25 13:08:41.914951] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:08:31.612 [2024-07-25 13:08:41.915013] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792633 ] 00:08:31.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.612 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:31.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.612 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:31.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.612 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:31.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.612 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:31.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.612 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:31.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.612 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:31.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.612 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:31.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.612 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:31.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:31.613 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:31.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.613 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:31.613 [2024-07-25 13:08:42.046961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.873 [2024-07-25 13:08:42.129426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:31.873 13:08:42 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 
-- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.873 13:08:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:33.253 13:08:43 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:33.253 13:08:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e 
]] 00:08:33.253 00:08:33.253 real 0m1.455s 00:08:33.253 user 0m0.011s 00:08:33.253 sys 0m0.001s 00:08:33.253 13:08:43 accel.accel_dif_generate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.253 13:08:43 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:33.253 ************************************ 00:08:33.253 END TEST accel_dif_generate 00:08:33.253 ************************************ 00:08:33.253 13:08:43 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:33.253 13:08:43 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:33.253 13:08:43 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.253 13:08:43 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.253 ************************************ 00:08:33.253 START TEST accel_dif_generate_copy 00:08:33.253 ************************************ 00:08:33.253 13:08:43 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate_copy 00:08:33.253 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:33.253 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:33.253 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.253 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.253 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:33.253 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:33.254 [2024-07-25 13:08:43.440240] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:33.254 [2024-07-25 13:08:43.440295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792914 ] 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested 
device 0000:3d:01.7 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3f:01.5 
cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:33.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.254 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:33.254 [2024-07-25 13:08:43.570339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.254 [2024-07-25 13:08:43.652651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 
00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.254 
13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.254 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.255 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.255 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.255 13:08:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:34.630 13:08:44 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- 
# IFS=: 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:34.630 00:08:34.630 real 0m1.456s 00:08:34.630 user 0m0.010s 00:08:34.630 sys 0m0.002s 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.630 13:08:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:34.630 ************************************ 00:08:34.630 END TEST accel_dif_generate_copy 00:08:34.630 ************************************ 00:08:34.630 13:08:44 accel -- accel/accel.sh@114 -- # run_test accel_dix_verify accel_test -t 1 -w dix_verify 00:08:34.630 13:08:44 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:34.630 13:08:44 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.630 13:08:44 accel -- common/autotest_common.sh@10 -- # set +x 00:08:34.630 ************************************ 00:08:34.630 START TEST accel_dix_verify 00:08:34.630 ************************************ 00:08:34.630 13:08:44 accel.accel_dix_verify -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dix_verify 00:08:34.630 13:08:44 accel.accel_dix_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:34.630 13:08:44 accel.accel_dix_verify -- accel/accel.sh@17 -- # local accel_module 00:08:34.630 13:08:44 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.630 13:08:44 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.630 13:08:44 accel.accel_dix_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dix_verify 00:08:34.630 13:08:44 accel.accel_dix_verify -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dix_verify 00:08:34.630 13:08:44 accel.accel_dix_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:34.630 13:08:44 accel.accel_dix_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:34.630 13:08:44 accel.accel_dix_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:34.630 13:08:44 accel.accel_dix_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.630 13:08:44 accel.accel_dix_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.630 13:08:44 accel.accel_dix_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:34.630 13:08:44 accel.accel_dix_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:34.630 13:08:44 accel.accel_dix_verify -- accel/accel.sh@41 -- # jq -r . 00:08:34.630 [2024-07-25 13:08:44.968229] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:34.630 [2024-07-25 13:08:44.968286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793198 ] 00:08:34.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.630 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:34.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.630 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:34.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.630 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:34.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.630 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:34.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.630 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:34.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:08:34.630 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: 
Requested device 0000:3f:01.3 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:34.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.631 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:34.631 [2024-07-25 13:08:45.100477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.890 [2024-07-25 13:08:45.183040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.890 13:08:45 accel.accel_dix_verify -- 
accel/accel.sh@20 -- # val= 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val=0x1 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val=dix_verify 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@23 -- # accel_opc=dix_verify 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # 
read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val=software 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:34.890 13:08:45 accel.accel_dix_verify 
-- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val=32 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val=32 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val=1 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val=No 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # 
IFS=: 00:08:34.890 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:34.891 13:08:45 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:08:34.891 13:08:45 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:34.891 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:34.891 13:08:45 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:36.264 13:08:46 
accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@27 -- # [[ -n dix_verify ]] 00:08:36.264 13:08:46 accel.accel_dix_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:36.264 00:08:36.264 real 0m1.459s 00:08:36.264 user 0m0.011s 00:08:36.264 sys 0m0.001s 00:08:36.264 13:08:46 accel.accel_dix_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.264 13:08:46 accel.accel_dix_verify -- common/autotest_common.sh@10 -- # set +x 00:08:36.264 ************************************ 00:08:36.264 END TEST accel_dix_verify 00:08:36.264 ************************************ 00:08:36.264 13:08:46 accel -- accel/accel.sh@115 -- # run_test accel_dix_generate accel_test -t 1 -w dif_generate 00:08:36.264 13:08:46 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:36.264 13:08:46 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.264 13:08:46 accel -- common/autotest_common.sh@10 -- # set +x 00:08:36.264 ************************************ 00:08:36.264 START TEST accel_dix_generate 00:08:36.264 ************************************ 00:08:36.264 13:08:46 accel.accel_dix_generate -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate 00:08:36.264 13:08:46 accel.accel_dix_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:36.264 13:08:46 accel.accel_dix_generate -- accel/accel.sh@17 -- # local accel_module 00:08:36.264 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.264 13:08:46 
accel.accel_dix_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:36.264 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.264 13:08:46 accel.accel_dix_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:36.264 13:08:46 accel.accel_dix_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:36.264 13:08:46 accel.accel_dix_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:36.264 13:08:46 accel.accel_dix_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:36.264 13:08:46 accel.accel_dix_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:36.264 13:08:46 accel.accel_dix_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:36.264 13:08:46 accel.accel_dix_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:36.264 13:08:46 accel.accel_dix_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:36.264 13:08:46 accel.accel_dix_generate -- accel/accel.sh@41 -- # jq -r . 00:08:36.264 [2024-07-25 13:08:46.480470] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:08:36.264 [2024-07-25 13:08:46.480511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793477 ] 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:36.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.264 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:36.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.265 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:36.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.265 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:36.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.265 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:36.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.265 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:36.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.265 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:36.265 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.265 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:36.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.265 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:36.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.265 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:36.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.265 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:36.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.265 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:36.265 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.265 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:36.265 [2024-07-25 13:08:46.597694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.265 [2024-07-25 13:08:46.680292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=0x1 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 
accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:36.265 13:08:46 
accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=software 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=32 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=32 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 
-- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=1 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=No 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.265 13:08:46 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.640 13:08:47 accel.accel_dix_generate -- 
accel/accel.sh@19 -- # read -r var val 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:37.640 13:08:47 accel.accel_dix_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e 
]] 00:08:37.640 00:08:37.640 real 0m1.423s 00:08:37.640 user 0m0.013s 00:08:37.640 sys 0m0.000s 00:08:37.640 13:08:47 accel.accel_dix_generate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.640 13:08:47 accel.accel_dix_generate -- common/autotest_common.sh@10 -- # set +x 00:08:37.640 ************************************ 00:08:37.640 END TEST accel_dix_generate 00:08:37.640 ************************************ 00:08:37.640 13:08:47 accel -- accel/accel.sh@117 -- # [[ y == y ]] 00:08:37.640 13:08:47 accel -- accel/accel.sh@118 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:37.640 13:08:47 accel -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:37.640 13:08:47 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.640 13:08:47 accel -- common/autotest_common.sh@10 -- # set +x 00:08:37.640 ************************************ 00:08:37.640 START TEST accel_comp 00:08:37.640 ************************************ 00:08:37.640 13:08:47 accel.accel_comp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:37.640 13:08:47 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:37.640 13:08:47 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:37.640 13:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.640 13:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.640 13:08:47 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:37.640 13:08:47 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:37.640 13:08:47 accel.accel_comp -- accel/accel.sh@12 -- # 
build_accel_config 00:08:37.640 13:08:47 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:37.640 13:08:47 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:37.640 13:08:47 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:37.640 13:08:47 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:37.640 13:08:47 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:37.640 13:08:47 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:37.640 13:08:47 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:37.640 [2024-07-25 13:08:47.996126] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:37.640 [2024-07-25 13:08:47.996192] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793763 ] 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:37.640 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:37.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.640 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:37.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.641 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:37.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.641 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:37.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.641 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:37.641 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.641 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:37.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.641 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:37.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.641 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:37.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.641 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:37.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.641 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:37.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.641 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:37.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.641 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:37.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.641 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:37.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.641 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:37.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.641 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:37.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.641 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:37.641 [2024-07-25 13:08:48.125587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.901 [2024-07-25 13:08:48.208187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 
accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 
accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:37.901 13:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.317 13:08:49 accel.accel_comp -- 
accel/accel.sh@19 -- # IFS=: 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:39.317 13:08:49 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:39.317 00:08:39.317 real 0m1.459s 00:08:39.317 user 0m0.010s 00:08:39.317 sys 0m0.002s 00:08:39.317 13:08:49 accel.accel_comp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.317 13:08:49 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:39.317 ************************************ 00:08:39.317 END TEST accel_comp 00:08:39.317 ************************************ 00:08:39.317 
13:08:49 accel -- accel/accel.sh@119 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:39.317 13:08:49 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:39.317 13:08:49 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.317 13:08:49 accel -- common/autotest_common.sh@10 -- # set +x 00:08:39.317 ************************************ 00:08:39.317 START TEST accel_decomp 00:08:39.317 ************************************ 00:08:39.317 13:08:49 accel.accel_decomp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:39.317 13:08:49 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:39.317 13:08:49 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:39.317 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.317 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.317 13:08:49 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:39.318 13:08:49 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:39.318 13:08:49 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:39.318 13:08:49 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:39.318 13:08:49 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:39.318 13:08:49 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:39.318 13:08:49 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:39.318 13:08:49 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:39.318 13:08:49 accel.accel_decomp -- 
accel/accel.sh@40 -- # local IFS=, 00:08:39.318 13:08:49 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:39.318 [2024-07-25 13:08:49.544215] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:39.318 [2024-07-25 13:08:49.544338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794042 ] 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:08:39.318 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:08:39.318 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:39.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.318 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:39.318 [2024-07-25 13:08:49.751699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.578 [2024-07-25 13:08:49.840101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:39.578 13:08:49 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # 
case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:39.578 13:08:49 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.578 13:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.959 13:08:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:40.959 13:08:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.959 13:08:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:40.960 13:08:51 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:40.960 00:08:40.960 real 0m1.564s 00:08:40.960 user 0m0.010s 00:08:40.960 sys 0m0.002s 00:08:40.960 13:08:51 accel.accel_decomp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.960 13:08:51 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:40.960 ************************************ 00:08:40.960 END TEST accel_decomp 00:08:40.960 ************************************ 00:08:40.960 13:08:51 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:40.960 13:08:51 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:40.960 13:08:51 accel -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.960 13:08:51 accel -- common/autotest_common.sh@10 -- # set +x 00:08:40.960 ************************************ 00:08:40.960 START TEST accel_decomp_full 00:08:40.960 ************************************ 00:08:40.960 13:08:51 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 
00:08:40.960 [2024-07-25 13:08:51.164200] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:40.960 [2024-07-25 13:08:51.164256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794330 ] 00:08:40.960 [2024-07-25 13:08:51.296701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.960 [2024-07-25 13:08:51.379373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:40.960 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case
"$var" in 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:40.961 13:08:51 accel.accel_decomp_full -- 
accel/accel.sh@20 -- # val= 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:40.961 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.220 13:08:51 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.220 13:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:42.158 13:08:52 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:42.158 13:08:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:42.158 00:08:42.158 real 0m1.467s 00:08:42.158 user 0m0.010s 00:08:42.158 sys 0m0.002s 00:08:42.158 13:08:52 accel.accel_decomp_full -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.158 13:08:52 accel.accel_decomp_full -- common/autotest_common.sh@10 -- 
# set +x 00:08:42.158 ************************************ 00:08:42.158 END TEST accel_decomp_full 00:08:42.158 ************************************ 00:08:42.158 13:08:52 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:42.158 13:08:52 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:42.158 13:08:52 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.158 13:08:52 accel -- common/autotest_common.sh@10 -- # set +x 00:08:42.418 ************************************ 00:08:42.418 START TEST accel_decomp_mcore 00:08:42.418 ************************************ 00:08:42.418 13:08:52 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:42.418 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:42.418 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:42.418 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.418 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.418 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:42.418 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:42.418 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:42.418 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:42.418 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:42.418 13:08:52 
accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:42.418 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:42.418 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:42.418 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:42.418 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:42.418 [2024-07-25 13:08:52.703415] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:42.418 [2024-07-25 13:08:52.703471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794609 ] 00:08:42.418 [2024-07-25 13:08:52.836298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.678 [2024-07-25 13:08:52.923844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.678 [2024-07-25 13:08:52.923941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.678 [2024-07-25 13:08:52.924030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.678 [2024-07-25 13:08:52.924034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:42.678 13:08:52 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.678 13:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.059 13:08:54 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:44.059 00:08:44.059 real 0m1.469s 00:08:44.059 user 0m4.671s 00:08:44.059 sys 0m0.190s 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.059 13:08:54 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:44.059 ************************************ 00:08:44.059 END TEST accel_decomp_mcore 00:08:44.059 ************************************ 00:08:44.059 13:08:54 accel -- 
accel/accel.sh@122 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:44.059 13:08:54 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:44.059 13:08:54 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.059 13:08:54 accel -- common/autotest_common.sh@10 -- # set +x 00:08:44.059 ************************************ 00:08:44.059 START TEST accel_decomp_full_mcore 00:08:44.059 ************************************ 00:08:44.059 13:08:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:44.059 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:44.059 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:44.059 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.060 13:08:54 
accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:44.060 [2024-07-25 13:08:54.246931] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:44.060 [2024-07-25 13:08:54.246987] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794897 ] 00:08:44.060 [2024-07-25 13:08:54.380025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.060 [2024-07-25 13:08:54.466879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.060 [2024-07-25 13:08:54.466975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.060 [2024-07-25 13:08:54.467064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.060 [2024-07-25 13:08:54.467067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.060 13:08:54
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val=decompress 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r 
var val 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.060 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.061 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:44.061 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.061 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.061 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.061 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:44.061 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.061 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.061 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.061 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.061 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.061 13:08:54 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.061 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.061 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.061 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.061 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.061 13:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val= 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:45.441 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:08:45.442 00:08:45.442 real 0m1.481s 00:08:45.442 user 0m4.715s 00:08:45.442 sys 0m0.189s 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.442 13:08:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:45.442 ************************************ 00:08:45.442 END TEST accel_decomp_full_mcore 00:08:45.442 ************************************ 00:08:45.442 13:08:55 accel -- accel/accel.sh@123 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:45.442 13:08:55 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:45.442 13:08:55 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.442 13:08:55 accel -- common/autotest_common.sh@10 -- # set +x 00:08:45.442 ************************************ 00:08:45.442 START TEST accel_decomp_mthread 00:08:45.442 ************************************ 00:08:45.442 13:08:55 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:45.442 13:08:55 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:45.442 13:08:55 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:45.442 13:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.442 13:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.442 13:08:55 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:45.442 13:08:55 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:45.442 13:08:55 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:45.442 13:08:55 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:45.442 13:08:55 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:45.442 13:08:55 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:45.442 13:08:55 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:45.442 13:08:55 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:45.442 13:08:55 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:45.442 13:08:55 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:45.442 [2024-07-25 13:08:55.803207] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:45.442 [2024-07-25 13:08:55.803266] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid795181 ] 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 
EAL: Requested device 0000:3d:01.5 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 
0000:3f:01.3 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:45.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.442 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:45.702 [2024-07-25 13:08:55.934220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.702 [2024-07-25 13:08:56.015869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 
-- # val= 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 
13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 
13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.702 13:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.082 13:08:57 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # IFS=: 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:47.082 00:08:47.082 real 0m1.462s 00:08:47.082 user 0m0.011s 00:08:47.082 sys 0m0.001s 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.082 13:08:57 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:47.082 ************************************ 00:08:47.082 END TEST accel_decomp_mthread 00:08:47.082 ************************************ 00:08:47.083 13:08:57 accel -- accel/accel.sh@124 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:47.083 13:08:57 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:47.083 13:08:57 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.083 13:08:57 accel -- common/autotest_common.sh@10 -- # set +x 00:08:47.083 
************************************ 00:08:47.083 START TEST accel_decomp_full_mthread 00:08:47.083 ************************************ 00:08:47.083 13:08:57 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:47.083 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:47.083 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:47.083 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.083 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.083 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:47.083 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:47.083 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:47.083 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:47.083 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:47.083 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:47.083 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:47.083 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:47.083 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:47.083 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
00:08:47.083 [2024-07-25 13:08:57.339041] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:47.083 [2024-07-25 13:08:57.339095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid795467 ] 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:47.083 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:47.083 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:47.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.083 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:47.083 [2024-07-25 13:08:57.467664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.083 [2024-07-25 13:08:57.549630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.343 13:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.343 13:08:57 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:48.724 13:08:58 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:48.724 00:08:48.724 real 0m1.488s 00:08:48.724 user 0m0.012s 00:08:48.724 sys 0m0.001s 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.724 13:08:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:48.724 ************************************ 00:08:48.724 END TEST accel_decomp_full_mthread 00:08:48.724 ************************************ 00:08:48.724 13:08:58 accel -- accel/accel.sh@126 -- # [[ y == y ]] 00:08:48.724 13:08:58 accel -- accel/accel.sh@127 -- # COMPRESSDEV=1 00:08:48.724 13:08:58 accel -- accel/accel.sh@128 -- # get_expected_opcs 00:08:48.724 13:08:58 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:48.724 13:08:58 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=795746 00:08:48.724 13:08:58 accel -- accel/accel.sh@63 -- # waitforlisten 795746 00:08:48.724 13:08:58 accel -- common/autotest_common.sh@831 -- # '[' -z 795746 ']' 00:08:48.724 13:08:58 accel -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:48.724 13:08:58 accel -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.724 13:08:58 accel -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.724 13:08:58 accel -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.724 13:08:58 accel -- common/autotest_common.sh@10 -- # set +x 00:08:48.724 13:08:58 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:48.724 13:08:58 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:48.724 13:08:58 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:48.724 13:08:58 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:48.724 13:08:58 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:48.725 13:08:58 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:48.725 13:08:58 accel -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:48.725 13:08:58 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:48.725 13:08:58 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:48.725 13:08:58 accel -- accel/accel.sh@41 -- # jq -r . 00:08:48.725 [2024-07-25 13:08:58.892036] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
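The build_accel_config trace above (accel.sh@31-41) shows the JSON config being assembled in the accel_json_cfg array, joined with IFS=, and normalized with jq -r before being handed to spdk_tgt on /dev/fd/63. A minimal sketch of that assembly, assuming the entries are wrapped in a standard SPDK "subsystems" document (the log only shows the jq normalization, so the wrapper here is an assumption):

```shell
# Sketch of build_accel_config from the accel.sh@31-41 trace above; the
# array entry is copied from the log, the "subsystems" wrapper is assumed.
accel_json_cfg=()

# COMPRESSDEV=1 in this run enables the compressdev entry (accel.sh@36-37).
accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')

build_config() {
    local IFS=,   # accel.sh@40: join the array entries with commas
    printf '{"subsystems":[{"subsystem":"accel","config":[%s]}]}\n' "${accel_json_cfg[*]}"
}

# In the log the result is fed to spdk_tgt as "-c /dev/fd/63".
build_config
```

With "pmd": 0 the compressdev module auto-selects a PMD, which is why the later notices show compress_qat being picked once the QAT PMD initializes.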
00:08:48.725 [2024-07-25 13:08:58.892097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid795746 ] 00:08:48.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:48.725 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:48.725 [same qat_pci_device_allocate()/EAL message pair repeated for each remaining QAT virtual function, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7] 00:08:48.725 [2024-07-25 13:08:59.022494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.725 [2024-07-25 13:08:59.108411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.664 [2024-07-25 13:08:59.790423] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:49.664 13:08:59 accel -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.664 13:08:59 accel -- common/autotest_common.sh@864 -- # return 0 00:08:49.664 13:08:59 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:49.664 13:08:59 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:49.664 13:08:59 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:49.664 13:08:59 accel -- accel/accel.sh@68 -- # [[ -n 1 ]] 00:08:49.664 13:08:59 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module 00:08:49.664 13:08:59 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:08:49.664 13:08:59 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.664 13:08:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:49.664 13:08:59 accel -- accel/accel.sh@56 -- # jq -r
'.subsystems[] | select(.subsystem=="accel").config[]' 00:08:49.664 13:08:59 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module 00:08:49.664 13:09:00 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.664 "method": "compressdev_scan_accel_module", 00:08:49.664 13:09:00 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:49.664 13:09:00 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:49.664 13:09:00 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:08:49.664 13:09:00 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.664 13:09:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:49.664 13:09:00 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.923 13:09:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:49.923 13:09:00 accel -- accel/accel.sh@72 -- # IFS== 00:08:49.923 13:09:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:49.923 13:09:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:49.923 [same for/IFS==/read/assign cycle repeated for each remaining opcode: most opcodes are assigned software, and two are assigned dpdk_compressdev] 00:08:49.924 13:09:00 accel -- accel/accel.sh@75 -- # killprocess 795746 00:08:49.924 13:09:00 accel -- common/autotest_common.sh@950 -- # '[' -z 795746 ']' 00:08:49.924 13:09:00 accel -- common/autotest_common.sh@954 -- # kill -0 795746 00:08:49.924 13:09:00 accel -- common/autotest_common.sh@955 -- # uname 00:08:49.924 13:09:00 accel -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.924 13:09:00 accel -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 795746 00:08:49.924 13:09:00 accel -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:49.924 13:09:00 accel -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:49.924 13:09:00 accel -- common/autotest_common.sh@968 -- # echo 'killing process with pid 795746' 00:08:49.924 killing process with pid 795746 00:08:49.924 13:09:00 accel -- common/autotest_common.sh@969 -- # kill 795746 00:08:49.924 13:09:00 accel -- common/autotest_common.sh@974 -- # wait 795746 00:08:50.184 13:09:00 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:50.184 13:09:00 accel -- accel/accel.sh@129 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:50.184 13:09:00 accel -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:50.184 13:09:00 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.184 13:09:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:50.184 ************************************ 00:08:50.184 START TEST accel_cdev_comp 00:08:50.184 ************************************ 00:08:50.184 13:09:00 accel.accel_cdev_comp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:50.184 13:09:00 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:50.184 13:09:00 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local
accel_module 00:08:50.184 13:09:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:50.184 13:09:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:50.184 13:09:00 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:50.184 13:09:00 accel.accel_cdev_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:50.184 13:09:00 accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:50.184 13:09:00 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:50.184 13:09:00 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:50.184 13:09:00 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:50.184 13:09:00 accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:50.184 13:09:00 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:50.184 13:09:00 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:50.184 13:09:00 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:50.184 13:09:00 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r . 00:08:50.184 [2024-07-25 13:09:00.598352] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:08:50.184 [2024-07-25 13:09:00.598405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796072 ] 00:08:50.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:50.184 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:50.184 [same qat_pci_device_allocate()/EAL message pair repeated for each remaining QAT virtual function, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7] 00:08:50.444 [2024-07-25 13:09:00.730731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.444 [2024-07-25 13:09:00.814716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.381 [2024-07-25 13:09:01.500368] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:51.381 [2024-07-25 13:09:01.502722] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x206ffe0 PMD being used: compress_qat 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01
accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 [2024-07-25 13:09:01.506558] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2274d70 PMD being used: compress_qat 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 
00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01 accel.accel_cdev_comp 
-- accel/accel.sh@20 -- # val=1 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.381 13:09:01 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:52.319 13:09:02 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:52.319 13:09:02 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.319 13:09:02 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:52.319 13:09:02 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:52.319 13:09:02 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:52.319 13:09:02 
accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.319 13:09:02 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:52.319 13:09:02 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:52.319 13:09:02 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:52.319 13:09:02 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:52.319 13:09:02 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:52.319 13:09:02 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:52.319 00:08:52.319 real 0m2.080s 00:08:52.319 user 0m0.007s 00:08:52.319 sys 0m0.003s 00:08:52.319 13:09:02 accel.accel_cdev_comp -- common/autotest_common.sh@1126 --
# xtrace_disable 00:08:52.319 13:09:02 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x 00:08:52.319 ************************************ 00:08:52.319 END TEST accel_cdev_comp 00:08:52.319 ************************************ 00:08:52.319 13:09:02 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:52.319 13:09:02 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:52.319 13:09:02 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.319 13:09:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:52.319 ************************************ 00:08:52.319 START TEST accel_cdev_decomp 00:08:52.319 ************************************ 00:08:52.319 13:09:02 accel.accel_cdev_decomp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:52.319 13:09:02 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:52.319 13:09:02 accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:52.319 13:09:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.319 13:09:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.319 13:09:02 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:52.319 13:09:02 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:52.319 13:09:02 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:52.319 13:09:02 accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:52.320 13:09:02 accel.accel_cdev_decomp -- accel/accel.sh@32 
-- # [[ 0 -gt 0 ]] 00:08:52.320 13:09:02 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:52.320 13:09:02 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:52.320 13:09:02 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:52.320 13:09:02 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:52.320 13:09:02 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:52.320 13:09:02 accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:52.320 [2024-07-25 13:09:02.765331] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:08:52.320 [2024-07-25 13:09:02.765388] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796459 ] 00:08:52.579 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.579 EAL: Requested device 0000:3d:01.0 cannot be used [... the same two messages repeat for each remaining QAT device function, 0000:3d:01.1 through 0000:3f:02.7 ...] 00:08:52.579 [2024-07-25 13:09:02.896452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.579 [2024-07-25 13:09:02.981071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.515 [2024-07-25 13:09:03.664419] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:53.515 [2024-07-25 13:09:03.666799] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xa69fe0 PMD being
used: compress_qat 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var 
val 00:08:53.515 [2024-07-25 13:09:03.670719] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xc6ed70 PMD being used: compress_qat 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 
00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 
00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.515 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:53.516 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.516 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.516 13:09:03 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.515 
13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:54.515 00:08:54.515 real 0m2.089s 00:08:54.515 user 0m0.007s 00:08:54.515 sys 0m0.001s 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.515 13:09:04 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:54.515 ************************************ 00:08:54.515 END TEST accel_cdev_decomp 00:08:54.515 ************************************ 00:08:54.515 13:09:04 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:54.515 13:09:04 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:54.515 13:09:04 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.515 13:09:04 accel -- common/autotest_common.sh@10 -- # set +x 00:08:54.515 ************************************ 00:08:54.515 START TEST accel_cdev_decomp_full 00:08:54.515 ************************************ 00:08:54.515 13:09:04 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:54.515 13:09:04 accel.accel_cdev_decomp_full -- accel/accel.sh@16 -- # 
local accel_opc 00:08:54.515 13:09:04 accel.accel_cdev_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:54.515 13:09:04 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:54.515 13:09:04 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:54.515 13:09:04 accel.accel_cdev_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:54.515 13:09:04 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:54.515 13:09:04 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:54.515 13:09:04 accel.accel_cdev_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:54.515 13:09:04 accel.accel_cdev_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:54.515 13:09:04 accel.accel_cdev_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:54.515 13:09:04 accel.accel_cdev_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:54.515 13:09:04 accel.accel_cdev_decomp_full -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:54.515 13:09:04 accel.accel_cdev_decomp_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:54.515 13:09:04 accel.accel_cdev_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:54.515 13:09:04 accel.accel_cdev_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:54.515 [2024-07-25 13:09:04.933428] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:08:54.515 [2024-07-25 13:09:04.933484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid797254 ] 00:08:54.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.515 EAL: Requested device 0000:3d:01.0 cannot be used [... the same two messages repeat for each remaining QAT device function, 0000:3d:01.1 through 0000:3f:02.7 ...] 00:08:54.775 [2024-07-25 13:09:05.063248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.775 [2024-07-25 13:09:05.148393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.714 [2024-07-25 13:09:05.833582] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:55.714 [2024-07-25 13:09:05.835914] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x24dafe0 PMD being used: compress_qat 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- #
read -r var val 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.714 [2024-07-25 13:09:05.838988] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x24de2b0 PMD being used: compress_qat 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # 
read -r var val 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.714 13:09:05 
accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # 
IFS=: 00:08:55.714 13:09:05 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- 
accel/accel.sh@21 -- # case "$var" in 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:56.653 00:08:56.653 real 0m2.092s 00:08:56.653 user 0m0.005s 00:08:56.653 sys 0m0.004s 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.653 13:09:06 accel.accel_cdev_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:56.653 ************************************ 00:08:56.653 END TEST accel_cdev_decomp_full 00:08:56.653 ************************************ 00:08:56.653 13:09:07 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:56.653 13:09:07 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:56.653 13:09:07 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.653 13:09:07 accel -- common/autotest_common.sh@10 -- # set +x 00:08:56.653 ************************************ 00:08:56.653 START TEST accel_cdev_decomp_mcore 00:08:56.653 ************************************ 00:08:56.653 13:09:07 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:56.653 13:09:07 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:56.653 13:09:07 accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:56.653 13:09:07 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:56.653 13:09:07 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:56.653 13:09:07 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:56.653 13:09:07 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:56.653 13:09:07 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:56.653 13:09:07 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:56.653 13:09:07 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:56.653 13:09:07 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:56.653 13:09:07 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:56.653 13:09:07 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:56.653 13:09:07 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:56.653 13:09:07 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:56.653 13:09:07 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:56.653 [2024-07-25 13:09:07.098555] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:08:56.653 [2024-07-25 13:09:07.098610] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid797685 ] 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:56.913 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:56.913 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:56.913 [2024-07-25 13:09:07.230526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.913 [2024-07-25 13:09:07.317717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.913 [2024-07-25 13:09:07.317809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.913 [2024-07-25 13:09:07.317914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.913 [2024-07-25 13:09:07.317918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.852 [2024-07-25 13:09:07.998392] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:57.852 [2024-07-25 13:09:08.000759] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x102d600 PMD being used: compress_qat 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.852 
13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore 
-- accel/accel.sh@23 -- # accel_opc=decompress 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:57.852 [2024-07-25 13:09:08.005820] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7ffb0819b8f0 PMD being used: compress_qat 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 [2024-07-25 13:09:08.006890] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7ffb0019b8f0 PMD being used: compress_qat 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 [2024-07-25 13:09:08.007548] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1032890 PMD being used: compress_qat 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:57.852 [2024-07-25 13:09:08.007691] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7ffaf819b8f0 PMD being used: compress_qat 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:57.852 13:09:08 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.852 13:09:08 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.792 13:09:09 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.792 13:09:09 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.792 13:09:09 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.793 13:09:09 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.793 13:09:09 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.793 13:09:09 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.793 13:09:09 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.793 13:09:09 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.793 13:09:09 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:58.793 13:09:09 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:58.793 13:09:09 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:58.793 00:08:58.793 real 0m2.103s 00:08:58.793 user 0m0.009s 00:08:58.793 sys 0m0.002s 00:08:58.793 13:09:09 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.793 13:09:09 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:58.793 ************************************ 00:08:58.793 END TEST accel_cdev_decomp_mcore 00:08:58.793 ************************************ 00:08:58.793 13:09:09 accel -- accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:58.793 13:09:09 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:58.793 13:09:09 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.793 13:09:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:58.793 ************************************ 00:08:58.793 START TEST accel_cdev_decomp_full_mcore 00:08:58.793 ************************************ 00:08:58.793 13:09:09 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:58.793 13:09:09 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:58.793 13:09:09 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:58.793 13:09:09 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:58.793 13:09:09 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.793 13:09:09 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:58.793 13:09:09 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:58.793 13:09:09 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:58.793 13:09:09 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:58.793 13:09:09 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:58.793 13:09:09 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:58.793 13:09:09 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:58.793 13:09:09 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:58.793 13:09:09 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:58.793 13:09:09 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:58.793 13:09:09 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:58.793 [2024-07-25 13:09:09.277049] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:08:58.793 [2024-07-25 13:09:09.277109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid798140 ] 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:59.053 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:59.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.053 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:59.053 [2024-07-25 13:09:09.412017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.053 [2024-07-25 13:09:09.499913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.053 [2024-07-25 13:09:09.500005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.053 [2024-07-25 13:09:09.500092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.053 [2024-07-25 13:09:09.500096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.991 [2024-07-25 13:09:10.180391] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:59.991 [2024-07-25 13:09:10.182793] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x16a8600 PMD being used: compress_qat 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r 
var val 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:59.991 
13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:59.991 [2024-07-25 13:09:10.187087] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f715819b8f0 PMD being used: compress_qat 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 [2024-07-25 13:09:10.188178] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f715019b8f0 PMD being used: compress_qat 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.991 [2024-07-25 13:09:10.188883] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x16a86a0 PMD being used: compress_qat 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:59.991 [2024-07-25 13:09:10.189024] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f714819b8f0 PMD being used: compress_qat 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 
13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.991 13:09:10 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.930 13:09:11 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.930 13:09:11 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:09:00.930 00:09:00.930 real 0m2.110s 00:09:00.930 user 0m0.012s 00:09:00.930 sys 0m0.001s 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.930 13:09:11 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:00.930 ************************************ 00:09:00.930 END TEST accel_cdev_decomp_full_mcore 00:09:00.930 ************************************ 00:09:00.930 13:09:11 accel -- accel/accel.sh@134 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:00.930 13:09:11 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:09:00.930 13:09:11 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.930 13:09:11 accel -- common/autotest_common.sh@10 -- # set +x 00:09:01.190 
************************************ 00:09:01.190 START TEST accel_cdev_decomp_mthread 00:09:01.190 ************************************ 00:09:01.190 13:09:11 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:01.190 13:09:11 accel.accel_cdev_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:01.190 13:09:11 accel.accel_cdev_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:01.190 13:09:11 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.190 13:09:11 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.190 13:09:11 accel.accel_cdev_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:01.190 13:09:11 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:01.190 13:09:11 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:01.190 13:09:11 accel.accel_cdev_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:01.190 13:09:11 accel.accel_cdev_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:01.190 13:09:11 accel.accel_cdev_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:01.190 13:09:11 accel.accel_cdev_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:01.190 13:09:11 accel.accel_cdev_decomp_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:09:01.190 13:09:11 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:09:01.190 13:09:11 accel.accel_cdev_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:01.190 
13:09:11 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:01.190 [2024-07-25 13:09:11.466243] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:09:01.190 [2024-07-25 13:09:11.466300] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid798521 ] 00:09:01.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.190 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:01.190 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.190 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: 
Requested device 0000:3d:02.2 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 
0000:3f:02.0 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:01.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.191 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:01.191 [2024-07-25 13:09:11.597536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.450 [2024-07-25 13:09:11.680654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.019 [2024-07-25 13:09:12.364910] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:09:02.019 [2024-07-25 13:09:12.367275] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x26e4fe0 PMD being used: compress_qat 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:02.019 13:09:12 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.019 [2024-07-25 13:09:12.371874] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x26ea1c0 PMD being used: compress_qat 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.019 13:09:12 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.019 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.020 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:02.020 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.020 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.020 [2024-07-25 13:09:12.374087] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x280cce0 PMD being used: compress_qat 00:09:02.020 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.020 13:09:12 accel.accel_cdev_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:09:02.020 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.020 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.020 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.020 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.020 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.020 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.020 13:09:12 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- 
accel/accel.sh@19 -- # IFS=: 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:09:03.399 00:09:03.399 real 0m2.094s 00:09:03.399 user 0m0.009s 00:09:03.399 sys 0m0.001s 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.399 13:09:13 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:03.399 ************************************ 00:09:03.399 END TEST accel_cdev_decomp_mthread 00:09:03.399 
************************************ 00:09:03.399 13:09:13 accel -- accel/accel.sh@135 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:03.399 13:09:13 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:03.399 13:09:13 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.400 13:09:13 accel -- common/autotest_common.sh@10 -- # set +x 00:09:03.400 ************************************ 00:09:03.400 START TEST accel_cdev_decomp_full_mthread 00:09:03.400 ************************************ 00:09:03.400 13:09:13 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:03.400 13:09:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:03.400 13:09:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:03.400 13:09:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.400 13:09:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.400 13:09:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:03.400 13:09:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:03.400 13:09:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:03.400 13:09:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:03.400 13:09:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 
-gt 0 ]] 00:09:03.400 13:09:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:03.400 13:09:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:03.400 13:09:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:09:03.400 13:09:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:09:03.400 13:09:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:03.400 13:09:13 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:03.400 [2024-07-25 13:09:13.620738] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:09:03.400 [2024-07-25 13:09:13.620792] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid798830 ] 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:09:03.400 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: 
Requested device 0000:3f:01.4 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:03.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.400 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:03.400 [2024-07-25 13:09:13.750621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.400 [2024-07-25 13:09:13.833441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.338 [2024-07-25 13:09:14.527280] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:09:04.338 [2024-07-25 13:09:14.529636] 
accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xb2ffe0 PMD being used: compress_qat 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read 
-r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.338 [2024-07-25 13:09:14.533417] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xb30080 PMD being used: compress_qat 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # 
case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- 
# val='1 seconds' 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.338 [2024-07-25 13:09:14.535791] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xd34cd0 PMD being used: compress_qat 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.338 13:09:14 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.277 
13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:05.277 
13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:09:05.277 00:09:05.277 real 0m2.090s 00:09:05.277 user 0m0.006s 00:09:05.277 sys 0m0.003s 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.277 13:09:15 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:05.277 ************************************ 00:09:05.277 END TEST accel_cdev_decomp_full_mthread 00:09:05.277 ************************************ 00:09:05.277 13:09:15 accel -- accel/accel.sh@136 -- # unset COMPRESSDEV 00:09:05.278 13:09:15 accel -- accel/accel.sh@139 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:05.278 13:09:15 accel -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:05.278 13:09:15 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.278 13:09:15 accel -- common/autotest_common.sh@10 -- # set +x 00:09:05.278 13:09:15 accel -- accel/accel.sh@139 -- # build_accel_config 00:09:05.278 13:09:15 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:05.278 13:09:15 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:05.278 13:09:15 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:05.278 13:09:15 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:05.278 13:09:15 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 
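The accel_dif_functional_tests run that starts below exercises T10 DIF/DIX protection-information checks (guard, app tag, ref tag), and its failure messages come from recomputing a field over the data block and comparing it with the stored value. As a rough illustration only — not SPDK's actual dif.c code, and with hypothetical helper names `crc16_t10dif` and `verify_guard` — the guard comparison behind messages like `Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867` can be sketched as:

```shell
#!/usr/bin/env bash
# Illustrative sketch only -- not SPDK's dif.c. The guard field is a
# CRC-16/T10-DIF (polynomial 0x8BB7, init 0, no reflection) computed over
# the data block; a mismatch yields the "Failed to compare Guard" errors
# seen in the accel_dif suite below.

crc16_t10dif() {               # args: byte values (0-255); prints hex CRC
    local crc=0 byte i
    for byte in "$@"; do
        crc=$(( crc ^ (byte << 8) ))
        for ((i = 0; i < 8; i++)); do
            if (( crc & 0x8000 )); then
                crc=$(( ((crc << 1) ^ 0x8BB7) & 0xFFFF ))
            else
                crc=$(( (crc << 1) & 0xFFFF ))
            fi
        done
    done
    printf '%04x\n' "$crc"
}

verify_guard() {               # args: lba stored_guard_hex byte values...
    local lba=$1 stored=$2; shift 2
    local actual
    actual=$(crc16_t10dif "$@")
    if [[ $actual != "$stored" ]]; then
        # mirrors the shape of the dif.c error records in this log
        echo "Failed to compare Guard: LBA=$lba, Expected=$stored, Actual=$actual"
        return 1
    fi
}
```

A useful self-check for this CRC variant: because init and xorout are both 0, appending the two big-endian guard bytes to the data drives the CRC back to 0000.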
00:09:05.278 13:09:15 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:05.278 13:09:15 accel -- accel/accel.sh@41 -- # jq -r . 00:09:05.278 ************************************ 00:09:05.278 START TEST accel_dif_functional_tests 00:09:05.278 ************************************ 00:09:05.537 13:09:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:05.537 [2024-07-25 13:09:15.820302] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:09:05.537 [2024-07-25 13:09:15.820356] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid799344 ] 00:09:05.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:05.537 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:05.537 [2024-07-25 13:09:15.952356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:05.797 [2024-07-25 13:09:16.037241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.797 [2024-07-25 13:09:16.037334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.797 [2024-07-25 13:09:16.037339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.797 00:09:05.797 00:09:05.797 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.797 http://cunit.sourceforge.net/ 00:09:05.797 00:09:05.797 00:09:05.797
Suite: accel_dif 00:09:05.797 Test: verify: DIF generated, GUARD check ...passed 00:09:05.797 Test: verify: DIX generated, GUARD check ...passed 00:09:05.797 Test: verify: DIF generated, APPTAG check ...passed 00:09:05.797 Test: verify: DIX generated, APPTAG check ...passed 00:09:05.797 Test: verify: DIF generated, REFTAG check ...passed 00:09:05.797 Test: verify: DIX generated, REFTAG check ...passed 00:09:05.797 Test: verify: DIF not generated, GUARD check ...[2024-07-25 13:09:16.122490] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:05.797 passed 00:09:05.797 Test: verify: DIX not generated, GUARD check ...[2024-07-25 13:09:16.122558] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=0, Actual=7867 00:09:05.797 passed 00:09:05.797 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 13:09:16.122592] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:05.797 passed 00:09:05.797 Test: verify: DIX not generated, APPTAG check ...[2024-07-25 13:09:16.122623] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=0 00:09:05.797 passed 00:09:05.797 Test: verify: DIF not generated, REFTAG check ...[2024-07-25 13:09:16.122654] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:05.797 passed 00:09:05.797 Test: verify: DIX not generated, REFTAG check ...[2024-07-25 13:09:16.122688] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=0 00:09:05.797 passed 00:09:05.797 Test: verify: DIF APPTAG correct, APPTAG check ...passed 00:09:05.797 Test: verify: DIX APPTAG correct, APPTAG check ...passed 00:09:05.797 Test: verify: DIF APPTAG incorrect, APPTAG check ...[2024-07-25 13:09:16.122786] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:05.797 passed 00:09:05.797 Test: 
verify: DIX APPTAG incorrect, APPTAG check ...[2024-07-25 13:09:16.122825] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:05.797 passed 00:09:05.797 Test: verify: DIF APPTAG incorrect, no APPTAG check ...passed 00:09:05.797 Test: verify: DIX APPTAG incorrect, no APPTAG check ...passed 00:09:05.797 Test: verify: DIF REFTAG incorrect, REFTAG ignore ...passed 00:09:05.797 Test: verify: DIX REFTAG incorrect, REFTAG ignore ...passed 00:09:05.797 Test: verify: DIF REFTAG_INIT correct, REFTAG check ...passed 00:09:05.797 Test: verify: DIX REFTAG_INIT correct, REFTAG check ...passed 00:09:05.797 Test: verify: DIF REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 13:09:16.123088] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:05.797 passed 00:09:05.797 Test: verify: DIX REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 13:09:16.123136] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:05.797 passed 00:09:05.797 Test: verify copy: DIF generated, GUARD check ...passed 00:09:05.797 Test: verify copy: DIF generated, APPTAG check ...passed 00:09:05.797 Test: verify copy: DIF generated, REFTAG check ...passed 00:09:05.797 Test: verify copy: DIF not generated, GUARD check ...[2024-07-25 13:09:16.123290] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:05.797 passed 00:09:05.797 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-25 13:09:16.123324] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:05.797 passed 00:09:05.797 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-25 13:09:16.123355] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:05.797 passed 00:09:05.797 Test: generate copy: DIF generated, GUARD check ...passed 00:09:05.797 Test: 
generate copy: DIF generated, APTTAG check ...passed 00:09:05.797 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:05.797 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:05.797 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:05.797 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:05.797 Test: generate copy: DIF iovecs-len validate ...[2024-07-25 13:09:16.123582] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:09:05.797 passed 00:09:05.797 Test: generate copy: DIF buffer alignment validate ...passed 00:09:05.797 00:09:05.797 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.797 suites 1 1 n/a 0 0 00:09:05.797 tests 38 38 38 0 0 00:09:05.797 asserts 170 170 170 0 n/a 00:09:05.797 00:09:05.797 Elapsed time = 0.003 seconds 00:09:06.057 00:09:06.057 real 0m0.545s 00:09:06.057 user 0m0.706s 00:09:06.057 sys 0m0.225s 00:09:06.057 13:09:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.057 13:09:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:09:06.057 ************************************ 00:09:06.057 END TEST accel_dif_functional_tests 00:09:06.057 ************************************ 00:09:06.057 00:09:06.057 real 0m54.241s 00:09:06.057 user 1m1.819s 00:09:06.057 sys 0m11.543s 00:09:06.057 13:09:16 accel -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.057 13:09:16 accel -- common/autotest_common.sh@10 -- # set +x 00:09:06.057 ************************************ 00:09:06.057 END TEST accel 00:09:06.057 ************************************ 00:09:06.057 13:09:16 -- spdk/autotest.sh@186 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:09:06.057 13:09:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:06.057 13:09:16 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.057 13:09:16 -- common/autotest_common.sh@10 -- # set +x 00:09:06.057 ************************************ 00:09:06.057 START TEST accel_rpc 00:09:06.057 ************************************ 00:09:06.057 13:09:16 accel_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:09:06.057 * Looking for test storage... 00:09:06.057 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:09:06.057 13:09:16 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:06.057 13:09:16 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=799420 00:09:06.057 13:09:16 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 799420 00:09:06.057 13:09:16 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:06.057 13:09:16 accel_rpc -- common/autotest_common.sh@831 -- # '[' -z 799420 ']' 00:09:06.057 13:09:16 accel_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.057 13:09:16 accel_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:06.057 13:09:16 accel_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.057 13:09:16 accel_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:06.057 13:09:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.316 [2024-07-25 13:09:16.597740] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
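The `trap 'killprocess $spdk_tgt_pid; exit 1' ERR` / `waitforlisten` sequence in the accel_rpc test above is the usual SPDK test-script lifecycle: launch `spdk_tgt` in the background, record its pid, install an ERR trap that tears it down on any failure, and poll until the RPC socket is listening. A stripped-down sketch of that pattern — `sleep` stands in for the real `spdk_tgt --wait-for-rpc`, a pid check stands in for the real `waitforlisten` socket poll, and these function bodies are simplified stand-ins, not the actual common/autotest_common.sh code:

```shell
#!/usr/bin/env bash
set -e

killprocess() {
    # best-effort kill of the recorded target pid
    kill -9 "$1" 2>/dev/null || true
}

start_target() {
    sleep 30 &                      # stand-in for spdk_tgt --wait-for-rpc
    spdk_tgt_pid=$!
}

# on any failing command, tear the target down before exiting (same trap
# as accel_rpc.sh@11 in the log above)
trap 'killprocess $spdk_tgt_pid; exit 1' ERR

start_target
# the real waitforlisten polls /var/tmp/spdk.sock; here we only confirm
# that the background pid is alive before "issuing RPCs"
kill -0 "$spdk_tgt_pid" && echo "target up: pid $spdk_tgt_pid"
killprocess "$spdk_tgt_pid"
```

The ERR trap is what produces the prompt cleanup seen between tests: a failing RPC kills the target immediately instead of leaving it to block the next subjob.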
00:09:06.316 [2024-07-25 13:09:16.597803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid799420 ] 00:09:06.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.316 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:06.317 [2024-07-25 13:09:16.731979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.576 [2024-07-25 13:09:16.817954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.145 13:09:17 accel_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.145 13:09:17 accel_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:07.145 13:09:17 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:07.145 13:09:17 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:07.145 13:09:17 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:07.145 13:09:17 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:07.145 13:09:17 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:07.145 13:09:17 accel_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:07.145 13:09:17 accel_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.145 13:09:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.145 ************************************ 00:09:07.145 START TEST accel_assign_opcode
************************************ 00:09:07.145 13:09:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # accel_assign_opcode_test_suite 00:09:07.145 13:09:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:07.145 13:09:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.145 13:09:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:07.145 [2024-07-25 13:09:17.528175] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:07.145 13:09:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.145 13:09:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:07.145 13:09:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.145 13:09:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:07.145 [2024-07-25 13:09:17.536191] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:07.145 13:09:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.145 13:09:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:07.145 13:09:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.145 13:09:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:07.404 13:09:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.404 13:09:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:07.404 13:09:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:07.404 13:09:17 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.404 13:09:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:07.404 13:09:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:09:07.404 13:09:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.404 software 00:09:07.404 00:09:07.404 real 0m0.275s 00:09:07.404 user 0m0.052s 00:09:07.404 sys 0m0.011s 00:09:07.404 13:09:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.404 13:09:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:07.404 ************************************ 00:09:07.404 END TEST accel_assign_opcode 00:09:07.404 ************************************ 00:09:07.404 13:09:17 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 799420 00:09:07.404 13:09:17 accel_rpc -- common/autotest_common.sh@950 -- # '[' -z 799420 ']' 00:09:07.404 13:09:17 accel_rpc -- common/autotest_common.sh@954 -- # kill -0 799420 00:09:07.404 13:09:17 accel_rpc -- common/autotest_common.sh@955 -- # uname 00:09:07.404 13:09:17 accel_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:07.404 13:09:17 accel_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 799420 00:09:07.662 13:09:17 accel_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:07.662 13:09:17 accel_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:07.663 13:09:17 accel_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 799420' 00:09:07.663 killing process with pid 799420 00:09:07.663 13:09:17 accel_rpc -- common/autotest_common.sh@969 -- # kill 799420 00:09:07.663 13:09:17 accel_rpc -- common/autotest_common.sh@974 -- # wait 799420 00:09:07.921 00:09:07.921 real 0m1.799s 00:09:07.921 user 0m1.847s 00:09:07.921 sys 0m0.583s 00:09:07.921 13:09:18 accel_rpc -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:07.921 13:09:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.921 ************************************ 00:09:07.921 END TEST accel_rpc 00:09:07.921 ************************************ 00:09:07.921 13:09:18 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:09:07.921 13:09:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:07.921 13:09:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.921 13:09:18 -- common/autotest_common.sh@10 -- # set +x 00:09:07.921 ************************************ 00:09:07.921 START TEST app_cmdline 00:09:07.921 ************************************ 00:09:07.921 13:09:18 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:09:07.922 * Looking for test storage... 00:09:08.183 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:09:08.183 13:09:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:08.183 13:09:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=799843 00:09:08.183 13:09:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 799843 00:09:08.183 13:09:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:08.183 13:09:18 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 799843 ']' 00:09:08.183 13:09:18 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.183 13:09:18 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.183 13:09:18 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:08.183 13:09:18 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.183 13:09:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:08.183 [2024-07-25 13:09:18.483100] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:09:08.183 [2024-07-25 13:09:18.483178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid799843 ] 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3d:02.1 cannot be used 
00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:08.183 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.183 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:08.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.184 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:08.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.184 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:08.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.184 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:08.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.184 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:08.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.184 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:08.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.184 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:08.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.184 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:08.184 [2024-07-25 13:09:18.614841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.485 [2024-07-25 13:09:18.703887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.053 13:09:19 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.053 13:09:19 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:09:09.053 13:09:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:09.312 { 00:09:09.312 "version": "SPDK v24.09-pre git sha1 325310f6a", 00:09:09.312 "fields": { 00:09:09.312 "major": 24, 00:09:09.312 "minor": 9, 00:09:09.312 "patch": 0, 00:09:09.312 "suffix": "-pre", 00:09:09.312 "commit": "325310f6a" 00:09:09.312 } 00:09:09.312 } 00:09:09.312 13:09:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:09.312 
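The spdk_get_version reply above carries both the human-readable banner ("SPDK v24.09-pre git sha1 325310f6a") and the already-parsed fields (major/minor/patch/suffix/commit). A minimal sketch of recovering those fields from the banner string alone — the regex and helper name are assumptions for illustration, not part of the SPDK tooling:

```python
import re

def parse_spdk_banner(banner):
    """Split a banner like 'SPDK v24.09-pre git sha1 325310f6a' into the
    fields the spdk_get_version RPC reports. Format assumed from this log."""
    m = re.match(r"SPDK v(\d+)\.(\d+)(?:\.(\d+))?(-\w+)? git sha1 (\w+)", banner)
    if not m:
        raise ValueError(f"unrecognized banner: {banner!r}")
    major, minor, patch, suffix, commit = m.groups()
    return {
        "major": int(major),            # '24' -> 24
        "minor": int(minor),            # '09' -> 9, matching the RPC's "minor": 9
        "patch": int(patch) if patch else 0,
        "suffix": suffix or "",
        "commit": commit,
    }

fields = parse_spdk_banner("SPDK v24.09-pre git sha1 325310f6a")
```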
13:09:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:09.312 13:09:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:09.312 13:09:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:09.312 13:09:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:09.312 13:09:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:09.312 13:09:19 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.312 13:09:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:09.312 13:09:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:09.312 13:09:19 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.312 13:09:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:09.312 13:09:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:09.312 13:09:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:09.312 13:09:19 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:09:09.312 13:09:19 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:09.312 13:09:19 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:09.312 13:09:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:09.312 13:09:19 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:09.312 13:09:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:09.312 13:09:19 app_cmdline -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:09.312 13:09:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:09.312 13:09:19 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:09.312 13:09:19 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:09:09.312 13:09:19 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:09.571 request: 00:09:09.571 { 00:09:09.571 "method": "env_dpdk_get_mem_stats", 00:09:09.571 "req_id": 1 00:09:09.571 } 00:09:09.571 Got JSON-RPC error response 00:09:09.571 response: 00:09:09.571 { 00:09:09.571 "code": -32601, 00:09:09.571 "message": "Method not found" 00:09:09.571 } 00:09:09.571 13:09:19 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:09:09.571 13:09:19 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:09.571 13:09:19 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:09.571 13:09:19 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:09.571 13:09:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 799843 00:09:09.571 13:09:19 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 799843 ']' 00:09:09.571 13:09:19 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 799843 00:09:09.571 13:09:19 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:09:09.571 13:09:19 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:09.571 13:09:19 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 799843 00:09:09.571 13:09:19 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:09.571 13:09:19 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:09.571 13:09:19 app_cmdline -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 799843' 00:09:09.571 killing process with pid 799843 00:09:09.571 13:09:19 app_cmdline -- common/autotest_common.sh@969 -- # kill 799843 00:09:09.571 13:09:19 app_cmdline -- common/autotest_common.sh@974 -- # wait 799843 00:09:09.830 00:09:09.830 real 0m1.958s 00:09:09.830 user 0m2.355s 00:09:09.830 sys 0m0.583s 00:09:09.830 13:09:20 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.830 13:09:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:09.830 ************************************ 00:09:09.830 END TEST app_cmdline 00:09:09.830 ************************************ 00:09:09.830 13:09:20 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:09:09.830 13:09:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:09.830 13:09:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.830 13:09:20 -- common/autotest_common.sh@10 -- # set +x 00:09:10.089 ************************************ 00:09:10.089 START TEST version 00:09:10.089 ************************************ 00:09:10.089 13:09:20 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:09:10.089 * Looking for test storage... 
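The app_cmdline run above exercises the `--rpcs-allowed spdk_get_version,rpc_get_methods` filter: calling any other method (here env_dpdk_get_mem_stats) fails with the standard JSON-RPC "Method not found" error, code -32601, which the test treats as success. A small sketch of how a client could recognize that error — the log prints a condensed request/response form, so the full JSON-RPC 2.0 envelope used here is an assumption:

```python
import json

METHOD_NOT_FOUND = -32601  # standard JSON-RPC 2.0 error code

def is_method_not_found(raw_response):
    """Return True if a raw JSON-RPC reply carries the -32601 error."""
    resp = json.loads(raw_response)
    err = resp.get("error") or {}
    return err.get("code") == METHOD_NOT_FOUND

# Shape of the reply the filtered spdk_tgt sends back (envelope assumed):
denied = '{"jsonrpc": "2.0", "id": 1, "error": {"code": -32601, "message": "Method not found"}}'
allowed = '{"jsonrpc": "2.0", "id": 2, "result": {}}'
```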
00:09:10.089 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:09:10.089 13:09:20 version -- app/version.sh@17 -- # get_header_version major 00:09:10.089 13:09:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:09:10.089 13:09:20 version -- app/version.sh@14 -- # cut -f2 00:09:10.089 13:09:20 version -- app/version.sh@14 -- # tr -d '"' 00:09:10.089 13:09:20 version -- app/version.sh@17 -- # major=24 00:09:10.089 13:09:20 version -- app/version.sh@18 -- # get_header_version minor 00:09:10.089 13:09:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:09:10.089 13:09:20 version -- app/version.sh@14 -- # cut -f2 00:09:10.089 13:09:20 version -- app/version.sh@14 -- # tr -d '"' 00:09:10.089 13:09:20 version -- app/version.sh@18 -- # minor=9 00:09:10.089 13:09:20 version -- app/version.sh@19 -- # get_header_version patch 00:09:10.089 13:09:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:09:10.089 13:09:20 version -- app/version.sh@14 -- # cut -f2 00:09:10.089 13:09:20 version -- app/version.sh@14 -- # tr -d '"' 00:09:10.089 13:09:20 version -- app/version.sh@19 -- # patch=0 00:09:10.089 13:09:20 version -- app/version.sh@20 -- # get_header_version suffix 00:09:10.089 13:09:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:09:10.089 13:09:20 version -- app/version.sh@14 -- # cut -f2 00:09:10.089 13:09:20 version -- app/version.sh@14 -- # tr -d '"' 00:09:10.089 13:09:20 version -- app/version.sh@20 -- # suffix=-pre 00:09:10.089 13:09:20 version -- app/version.sh@22 -- # version=24.9 00:09:10.089 13:09:20 
version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:10.090 13:09:20 version -- app/version.sh@28 -- # version=24.9rc0 00:09:10.090 13:09:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:09:10.090 13:09:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:10.090 13:09:20 version -- app/version.sh@30 -- # py_version=24.9rc0 00:09:10.090 13:09:20 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:09:10.090 00:09:10.090 real 0m0.192s 00:09:10.090 user 0m0.092s 00:09:10.090 sys 0m0.151s 00:09:10.090 13:09:20 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.090 13:09:20 version -- common/autotest_common.sh@10 -- # set +x 00:09:10.090 ************************************ 00:09:10.090 END TEST version 00:09:10.090 ************************************ 00:09:10.349 13:09:20 -- spdk/autotest.sh@192 -- # '[' 1 -eq 1 ']' 00:09:10.349 13:09:20 -- spdk/autotest.sh@193 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:09:10.349 13:09:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:10.349 13:09:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.349 13:09:20 -- common/autotest_common.sh@10 -- # set +x 00:09:10.349 ************************************ 00:09:10.349 START TEST blockdev_general 00:09:10.349 ************************************ 00:09:10.349 13:09:20 blockdev_general -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:09:10.349 * Looking for test storage... 
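The version test above builds "24.9rc0" by pulling each SPDK_VERSION_* macro out of include/spdk/version.h with a grep/cut/tr pipeline, then appending "rc0" when the suffix is non-empty. The same extraction sketched in Python against a throwaway header holding the values this log reports (the helper name and rc0 mapping mirror what the trace shows; they are not SPDK API):

```python
import re

# Stand-in for include/spdk/version.h, with the values from this run.
header = """\
#define SPDK_VERSION_MAJOR 24
#define SPDK_VERSION_MINOR 9
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"
"""

def get_header_version(text, field):
    """grep -E '^#define SPDK_VERSION_<field>' | cut | tr -d '"', in Python."""
    m = re.search(rf'^#define SPDK_VERSION_{field}\s+(\S+)', text, re.M)
    return m.group(1).strip('"')

major = get_header_version(header, "MAJOR")
minor = get_header_version(header, "MINOR")
patch = get_header_version(header, "PATCH")
suffix = get_header_version(header, "SUFFIX")

version = f"{major}.{minor}"
if patch != "0":
    version += f".{patch}"          # patch skipped when 0, as in this run
if suffix:
    version += "rc0"                # non-empty suffix maps to rc0 per the trace
```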
00:09:10.349 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:10.349 13:09:20 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device= 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@683 -- # dek= 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx= 00:09:10.349 13:09:20 blockdev_general -- 
bdev/blockdev.sh@685 -- # wait_for_rpc= 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]] 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=800403 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:10.349 13:09:20 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 800403 00:09:10.349 13:09:20 blockdev_general -- common/autotest_common.sh@831 -- # '[' -z 800403 ']' 00:09:10.349 13:09:20 blockdev_general -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.349 13:09:20 blockdev_general -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.349 13:09:20 blockdev_general -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.350 13:09:20 blockdev_general -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.350 13:09:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:10.350 13:09:20 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:09:10.350 [2024-07-25 13:09:20.797508] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:09:10.350 [2024-07-25 13:09:20.797571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800403 ] 00:09:10.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.612 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:10.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.612 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:10.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.612 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:10.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.612 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:10.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.612 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:10.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.612 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:10.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.612 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:10.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.612 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:10.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.612 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:10.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.612 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:10.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.612 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:10.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.612 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:10.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.612 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:10.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.612 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:10.612 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:10.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:10.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:10.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:10.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:10.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:10.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:10.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:10.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:10.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:10.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:10.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:10.613 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:10.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:10.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:10.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:10.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:10.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.613 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:10.613 [2024-07-25 13:09:20.931263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.613 [2024-07-25 13:09:21.016225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.550 13:09:21 blockdev_general -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:11.550 13:09:21 blockdev_general -- common/autotest_common.sh@864 -- # return 0 00:09:11.550 13:09:21 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:09:11.550 13:09:21 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:09:11.550 13:09:21 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:09:11.550 13:09:21 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.550 13:09:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:11.809 [2024-07-25 13:09:22.200079] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:11.809 [2024-07-25 13:09:22.200133] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:11.809 00:09:11.809 [2024-07-25 13:09:22.208072] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: Malloc2 00:09:11.809 [2024-07-25 13:09:22.208097] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:11.809 00:09:11.809 Malloc0 00:09:11.809 Malloc1 00:09:11.809 Malloc2 00:09:11.809 Malloc3 00:09:11.809 Malloc4 00:09:11.809 Malloc5 00:09:12.068 Malloc6 00:09:12.068 Malloc7 00:09:12.068 Malloc8 00:09:12.068 Malloc9 00:09:12.068 [2024-07-25 13:09:22.341119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:12.068 [2024-07-25 13:09:22.341170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.068 [2024-07-25 13:09:22.341187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15937a0 00:09:12.068 [2024-07-25 13:09:22.341198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.068 [2024-07-25 13:09:22.342422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.068 [2024-07-25 13:09:22.342449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:12.068 TestPT 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.068 13:09:22 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:09:12.068 5000+0 records in 00:09:12.068 5000+0 records out 00:09:12.068 10240000 bytes (10 MB, 9.8 MiB) copied, 0.036729 s, 279 MB/s 00:09:12.068 13:09:22 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:12.068 AIO0 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.068 
13:09:22 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.068 13:09:22 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:09:12.068 13:09:22 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.068 13:09:22 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.068 13:09:22 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.068 13:09:22 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:09:12.068 13:09:22 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:09:12.068 13:09:22 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.068 13:09:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x 
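The blockdev setup above collects bdev names with `rpc_cmd bdev_get_bdevs | jq -r '.[] | select(.claimed == false)'`, skipping bdevs claimed by another module (the passthru claimed Malloc3 for TestPT). The same filter in Python, applied to a trimmed, illustrative sample of the JSON the trace dumps below:

```python
import json

# Trimmed sample of bdev_get_bdevs output; only the fields the filter
# needs. Malloc3's claimed=true reflects the passthru claim in this log.
sample = json.loads("""
[
  {"name": "Malloc0", "claimed": false},
  {"name": "Malloc1p0", "claimed": false},
  {"name": "Malloc3", "claimed": true}
]
""")

# jq: .[] | select(.claimed == false) | .name
unclaimed = [bdev["name"] for bdev in sample if not bdev["claimed"]]
```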
00:09:12.328 13:09:22 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.328 13:09:22 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:09:12.328 13:09:22 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name 00:09:12.330 13:09:22 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "3b808569-5006-4bf6-b730-7f8599e97edf"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3b808569-5006-4bf6-b730-7f8599e97edf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "cf8815c4-9c4d-5211-aaf6-9e90f052b924"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "cf8815c4-9c4d-5211-aaf6-9e90f052b924",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": 
false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "3112f734-0144-5fc4-96af-64386df9f73c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3112f734-0144-5fc4-96af-64386df9f73c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "dc1ffe69-c04f-5a38-a70e-4e5b87f8a2b4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dc1ffe69-c04f-5a38-a70e-4e5b87f8a2b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "ae4d7ea3-29ff-556e-9b5c-afb8876b0370"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ae4d7ea3-29ff-556e-9b5c-afb8876b0370",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "ed6219d7-2950-585d-8727-1830b5245326"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ed6219d7-2950-585d-8727-1830b5245326",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' 
' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "ecf98253-73d7-53fe-9929-3414f513bac0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ecf98253-73d7-53fe-9929-3414f513bac0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "36e3592e-750f-5875-b52a-d2fd9dac43c3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "36e3592e-750f-5875-b52a-d2fd9dac43c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8adffbb4-1e2e-5d77-b384-2ca45ba86537"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8adffbb4-1e2e-5d77-b384-2ca45ba86537",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "fc26b94f-c978-5a03-b64f-2b2404ba7fbc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fc26b94f-c978-5a03-b64f-2b2404ba7fbc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "50fe448f-0154-5ce4-a63d-c3b2809e2983"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "50fe448f-0154-5ce4-a63d-c3b2809e2983",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "ba0fd3f2-41f1-5ed1-a815-aa41ef0b8867"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ba0fd3f2-41f1-5ed1-a815-aa41ef0b8867",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": 
true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "abe5c8f1-ae94-4926-b1a2-7aa846e6925d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "abe5c8f1-ae94-4926-b1a2-7aa846e6925d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "abe5c8f1-ae94-4926-b1a2-7aa846e6925d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "77cafd76-8026-483a-bf9f-516cfe300e35",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": 
"18d8d4d7-4fbe-4c6e-851c-85e73644d78a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "5029a9cf-a6cd-44ef-a865-e479debb31f4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5029a9cf-a6cd-44ef-a865-e479debb31f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "5029a9cf-a6cd-44ef-a865-e479debb31f4",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "6ad27282-156f-4561-a619-c9fcafa17583",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "f1e8ac09-9112-46d7-9914-7e97ee1df239",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' 
"ee7e4046-a52e-4593-a529-cc5c8b44412b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ee7e4046-a52e-4593-a529-cc5c8b44412b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ee7e4046-a52e-4593-a529-cc5c8b44412b",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "3f0fce89-5697-409f-97d3-4387f9e498c9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "f6258df2-de48-484f-ac3b-46d1ab22219c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "d34ba14a-1aac-4406-88fc-8783c235370c"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "d34ba14a-1aac-4406-88fc-8783c235370c",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:09:12.330 13:09:22 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:09:12.330 13:09:22 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0 00:09:12.330 13:09:22 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:09:12.330 13:09:22 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 800403 00:09:12.330 13:09:22 blockdev_general -- common/autotest_common.sh@950 -- # '[' -z 800403 ']' 00:09:12.330 13:09:22 blockdev_general -- common/autotest_common.sh@954 -- # kill -0 800403 00:09:12.330 13:09:22 blockdev_general -- common/autotest_common.sh@955 -- # uname 00:09:12.330 13:09:22 blockdev_general -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.330 13:09:22 blockdev_general -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 800403 00:09:12.589 13:09:22 blockdev_general -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:12.589 13:09:22 blockdev_general -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:12.589 13:09:22 blockdev_general -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 800403' 00:09:12.589 killing process with pid 800403 00:09:12.589 13:09:22 blockdev_general -- common/autotest_common.sh@969 -- # kill 800403 00:09:12.589 13:09:22 blockdev_general -- common/autotest_common.sh@974 -- # wait 800403 00:09:12.847 13:09:23 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:12.847 13:09:23 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:09:12.847 13:09:23 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:09:12.847 13:09:23 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.847 13:09:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:12.847 ************************************ 00:09:12.847 START TEST bdev_hello_world 00:09:12.847 ************************************ 00:09:12.847 13:09:23 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:09:12.847 [2024-07-25 13:09:23.334597] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:09:12.847 [2024-07-25 13:09:23.334651] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800864 ] 00:09:13.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.106 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:13.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.106 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:13.106 [2024-07-25 13:09:23.469067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.106 [2024-07-25 13:09:23.551248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.365 [2024-07-25 13:09:23.704686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:13.365 [2024-07-25 13:09:23.704734] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:13.365 [2024-07-25 13:09:23.704748] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:13.365 [2024-07-25 13:09:23.712692] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:13.365 [2024-07-25 13:09:23.712717] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:13.365 [2024-07-25 13:09:23.720702] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:13.365 [2024-07-25 13:09:23.720725] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:13.365 [2024-07-25 13:09:23.791527] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:13.365 [2024-07-25 13:09:23.791575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.365 [2024-07-25 13:09:23.791591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19eccc0 00:09:13.365 [2024-07-25 13:09:23.791602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.365 [2024-07-25 13:09:23.793126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.365 [2024-07-25 13:09:23.793169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:13.623 [2024-07-25 13:09:23.923666] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:13.623 [2024-07-25 13:09:23.923734] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:09:13.623 [2024-07-25 13:09:23.923788] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:13.623 [2024-07-25 13:09:23.923860] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:13.623 [2024-07-25 13:09:23.923937] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:13.623 [2024-07-25 13:09:23.923969] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:13.623 [2024-07-25 13:09:23.924033] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:09:13.623 00:09:13.623 [2024-07-25 13:09:23.924075] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:13.882 00:09:13.882 real 0m0.915s 00:09:13.882 user 0m0.601s 00:09:13.882 sys 0m0.281s 00:09:13.882 13:09:24 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.882 13:09:24 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:13.882 ************************************ 00:09:13.882 END TEST bdev_hello_world 00:09:13.882 ************************************ 00:09:13.882 13:09:24 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:09:13.882 13:09:24 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:13.882 13:09:24 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.882 13:09:24 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:13.882 ************************************ 00:09:13.882 START TEST bdev_bounds 00:09:13.882 ************************************ 00:09:13.882 13:09:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:09:13.882 13:09:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=800975 00:09:13.882 13:09:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:13.882 13:09:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 800975' 00:09:13.882 Process bdevio pid: 800975 00:09:13.882 13:09:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 800975 00:09:13.882 13:09:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 800975 ']' 00:09:13.882 13:09:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.882 13:09:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:09:13.882 13:09:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.882 13:09:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.882 13:09:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:09:13.882 13:09:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:13.882 [2024-07-25 13:09:24.339005] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:09:13.882 [2024-07-25 13:09:24.339064] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800975 ] 00:09:14.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:14.141 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:14.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:14.141 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:14.141 [2024-07-25 13:09:24.472292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:14.141 [2024-07-25 13:09:24.561264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.141 [2024-07-25 13:09:24.561358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 
00:09:14.141 [2024-07-25 13:09:24.561362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.400 [2024-07-25 13:09:24.706391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:14.400 [2024-07-25 13:09:24.706444] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:14.400 [2024-07-25 13:09:24.706458] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:14.400 [2024-07-25 13:09:24.714402] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:14.400 [2024-07-25 13:09:24.714427] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:14.400 [2024-07-25 13:09:24.722416] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:14.400 [2024-07-25 13:09:24.722440] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:14.400 [2024-07-25 13:09:24.793748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:14.400 [2024-07-25 13:09:24.793795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.400 [2024-07-25 13:09:24.793811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x247f7b0 00:09:14.400 [2024-07-25 13:09:24.793822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.400 [2024-07-25 13:09:24.795201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.400 [2024-07-25 13:09:24.795228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:14.968 13:09:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.968 13:09:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:09:14.968 13:09:25 blockdev_general.bdev_bounds -- 
bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:14.968 I/O targets: 00:09:14.968 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:09:14.968 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:09:14.968 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:09:14.968 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:09:14.968 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:09:14.968 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:09:14.968 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:09:14.968 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:09:14.968 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:09:14.968 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:09:14.968 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:09:14.968 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:09:14.968 raid0: 131072 blocks of 512 bytes (64 MiB) 00:09:14.968 concat0: 131072 blocks of 512 bytes (64 MiB) 00:09:14.968 raid1: 65536 blocks of 512 bytes (32 MiB) 00:09:14.968 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:09:14.968 00:09:14.968 00:09:14.968 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.968 http://cunit.sourceforge.net/ 00:09:14.968 00:09:14.968 00:09:14.968 Suite: bdevio tests on: AIO0 00:09:14.968 Test: blockdev write read block ...passed 00:09:14.968 Test: blockdev write zeroes read block ...passed 00:09:14.968 Test: blockdev write zeroes read no split ...passed 00:09:14.968 Test: blockdev write zeroes read split ...passed 00:09:14.968 Test: blockdev write zeroes read split partial ...passed 00:09:14.968 Test: blockdev reset ...passed 00:09:14.968 Test: blockdev write read 8 blocks ...passed 00:09:14.968 Test: blockdev write read size > 128k ...passed 00:09:14.968 Test: blockdev write read invalid size ...passed 00:09:14.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:14.968 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:14.968 Test: blockdev write 
read max offset ...passed 00:09:14.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:14.968 Test: blockdev writev readv 8 blocks ...passed 00:09:14.968 Test: blockdev writev readv 30 x 1block ...passed 00:09:14.968 Test: blockdev writev readv block ...passed 00:09:14.968 Test: blockdev writev readv size > 128k ...passed 00:09:14.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:14.968 Test: blockdev comparev and writev ...passed 00:09:14.968 Test: blockdev nvme passthru rw ...passed 00:09:14.968 Test: blockdev nvme passthru vendor specific ...passed 00:09:14.968 Test: blockdev nvme admin passthru ...passed 00:09:14.968 Test: blockdev copy ...passed 00:09:14.968 Suite: bdevio tests on: raid1 00:09:14.968 Test: blockdev write read block ...passed 00:09:14.968 Test: blockdev write zeroes read block ...passed 00:09:14.968 Test: blockdev write zeroes read no split ...passed 00:09:14.968 Test: blockdev write zeroes read split ...passed 00:09:14.968 Test: blockdev write zeroes read split partial ...passed 00:09:14.968 Test: blockdev reset ...passed 00:09:14.968 Test: blockdev write read 8 blocks ...passed 00:09:14.968 Test: blockdev write read size > 128k ...passed 00:09:14.968 Test: blockdev write read invalid size ...passed 00:09:14.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:14.968 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:14.968 Test: blockdev write read max offset ...passed 00:09:14.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:14.968 Test: blockdev writev readv 8 blocks ...passed 00:09:14.968 Test: blockdev writev readv 30 x 1block ...passed 00:09:14.968 Test: blockdev writev readv block ...passed 00:09:14.968 Test: blockdev writev readv size > 128k ...passed 00:09:14.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:14.968 Test: blockdev comparev and writev ...passed 00:09:14.968 
Test: blockdev nvme passthru rw ...passed 00:09:14.968 Test: blockdev nvme passthru vendor specific ...passed 00:09:14.968 Test: blockdev nvme admin passthru ...passed 00:09:14.968 Test: blockdev copy ...passed 00:09:14.968 Suite: bdevio tests on: concat0 00:09:14.968 Test: blockdev write read block ...passed 00:09:14.968 Test: blockdev write zeroes read block ...passed 00:09:14.968 Test: blockdev write zeroes read no split ...passed 00:09:14.968 Test: blockdev write zeroes read split ...passed 00:09:14.968 Test: blockdev write zeroes read split partial ...passed 00:09:14.968 Test: blockdev reset ...passed 00:09:14.968 Test: blockdev write read 8 blocks ...passed 00:09:14.968 Test: blockdev write read size > 128k ...passed 00:09:14.968 Test: blockdev write read invalid size ...passed 00:09:14.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:14.968 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:14.968 Test: blockdev write read max offset ...passed 00:09:14.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:14.968 Test: blockdev writev readv 8 blocks ...passed 00:09:14.968 Test: blockdev writev readv 30 x 1block ...passed 00:09:14.968 Test: blockdev writev readv block ...passed 00:09:14.968 Test: blockdev writev readv size > 128k ...passed 00:09:14.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:14.968 Test: blockdev comparev and writev ...passed 00:09:14.968 Test: blockdev nvme passthru rw ...passed 00:09:14.968 Test: blockdev nvme passthru vendor specific ...passed 00:09:14.968 Test: blockdev nvme admin passthru ...passed 00:09:14.968 Test: blockdev copy ...passed 00:09:14.968 Suite: bdevio tests on: raid0 00:09:14.968 Test: blockdev write read block ...passed 00:09:14.968 Test: blockdev write zeroes read block ...passed 00:09:14.968 Test: blockdev write zeroes read no split ...passed 00:09:14.968 Test: blockdev write zeroes read split ...passed 
00:09:14.968 Test: blockdev write zeroes read split partial ...passed 00:09:14.968 Test: blockdev reset ...passed 00:09:14.968 Test: blockdev write read 8 blocks ...passed 00:09:14.968 Test: blockdev write read size > 128k ...passed 00:09:14.968 Test: blockdev write read invalid size ...passed 00:09:14.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:14.968 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:14.968 Test: blockdev write read max offset ...passed 00:09:14.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:14.968 Test: blockdev writev readv 8 blocks ...passed 00:09:14.968 Test: blockdev writev readv 30 x 1block ...passed 00:09:14.968 Test: blockdev writev readv block ...passed 00:09:14.968 Test: blockdev writev readv size > 128k ...passed 00:09:14.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:14.968 Test: blockdev comparev and writev ...passed 00:09:14.968 Test: blockdev nvme passthru rw ...passed 00:09:14.968 Test: blockdev nvme passthru vendor specific ...passed 00:09:14.968 Test: blockdev nvme admin passthru ...passed 00:09:14.968 Test: blockdev copy ...passed 00:09:14.968 Suite: bdevio tests on: TestPT 00:09:14.968 Test: blockdev write read block ...passed 00:09:14.968 Test: blockdev write zeroes read block ...passed 00:09:14.968 Test: blockdev write zeroes read no split ...passed 00:09:14.968 Test: blockdev write zeroes read split ...passed 00:09:14.968 Test: blockdev write zeroes read split partial ...passed 00:09:14.968 Test: blockdev reset ...passed 00:09:15.228 Test: blockdev write read 8 blocks ...passed 00:09:15.228 Test: blockdev write read size > 128k ...passed 00:09:15.228 Test: blockdev write read invalid size ...passed 00:09:15.228 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:15.229 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:15.229 Test: blockdev write 
read max offset ...passed 00:09:15.229 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:15.229 Test: blockdev writev readv 8 blocks ...passed 00:09:15.229 Test: blockdev writev readv 30 x 1block ...passed 00:09:15.229 Test: blockdev writev readv block ...passed 00:09:15.229 Test: blockdev writev readv size > 128k ...passed 00:09:15.229 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:15.229 Test: blockdev comparev and writev ...passed 00:09:15.229 Test: blockdev nvme passthru rw ...passed 00:09:15.229 Test: blockdev nvme passthru vendor specific ...passed 00:09:15.229 Test: blockdev nvme admin passthru ...passed 00:09:15.229 Test: blockdev copy ...passed 00:09:15.229 Suite: bdevio tests on: Malloc2p7 00:09:15.229 Test: blockdev write read block ...passed 00:09:15.229 Test: blockdev write zeroes read block ...passed 00:09:15.229 Test: blockdev write zeroes read no split ...passed 00:09:15.229 Test: blockdev write zeroes read split ...passed 00:09:15.229 Test: blockdev write zeroes read split partial ...passed 00:09:15.229 Test: blockdev reset ...passed 00:09:15.229 Test: blockdev write read 8 blocks ...passed 00:09:15.229 Test: blockdev write read size > 128k ...passed 00:09:15.229 Test: blockdev write read invalid size ...passed 00:09:15.229 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:15.229 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:15.229 Test: blockdev write read max offset ...passed 00:09:15.229 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:15.229 Test: blockdev writev readv 8 blocks ...passed 00:09:15.229 Test: blockdev writev readv 30 x 1block ...passed 00:09:15.229 Test: blockdev writev readv block ...passed 00:09:15.229 Test: blockdev writev readv size > 128k ...passed 00:09:15.229 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:15.229 Test: blockdev comparev and writev ...passed 
00:09:15.229 Test: blockdev nvme passthru rw ...passed 00:09:15.229 Test: blockdev nvme passthru vendor specific ...passed 00:09:15.229 Test: blockdev nvme admin passthru ...passed 00:09:15.229 Test: blockdev copy ...passed 00:09:15.229 Suite: bdevio tests on: Malloc2p6 00:09:15.229 Test: blockdev write read block ...passed 00:09:15.229 Test: blockdev write zeroes read block ...passed 00:09:15.229 Test: blockdev write zeroes read no split ...passed 00:09:15.229 Test: blockdev write zeroes read split ...passed 00:09:15.229 Test: blockdev write zeroes read split partial ...passed 00:09:15.229 Test: blockdev reset ...passed 00:09:15.229 Test: blockdev write read 8 blocks ...passed 00:09:15.229 Test: blockdev write read size > 128k ...passed 00:09:15.229 Test: blockdev write read invalid size ...passed 00:09:15.229 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:15.229 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:15.229 Test: blockdev write read max offset ...passed 00:09:15.229 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:15.229 Test: blockdev writev readv 8 blocks ...passed 00:09:15.229 Test: blockdev writev readv 30 x 1block ...passed 00:09:15.229 Test: blockdev writev readv block ...passed 00:09:15.229 Test: blockdev writev readv size > 128k ...passed 00:09:15.229 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:15.229 Test: blockdev comparev and writev ...passed 00:09:15.229 Test: blockdev nvme passthru rw ...passed 00:09:15.229 Test: blockdev nvme passthru vendor specific ...passed 00:09:15.229 Test: blockdev nvme admin passthru ...passed 00:09:15.229 Test: blockdev copy ...passed 00:09:15.229 Suite: bdevio tests on: Malloc2p5 00:09:15.229 Test: blockdev write read block ...passed 00:09:15.229 Test: blockdev write zeroes read block ...passed 00:09:15.229 Test: blockdev write zeroes read no split ...passed 00:09:15.229 Test: blockdev write zeroes 
read split ...passed 00:09:15.229 Test: blockdev write zeroes read split partial ...passed 00:09:15.229 Test: blockdev reset ...passed 00:09:15.229 Test: blockdev write read 8 blocks ...passed 00:09:15.229 Test: blockdev write read size > 128k ...passed 00:09:15.229 Test: blockdev write read invalid size ...passed 00:09:15.229 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:15.229 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:15.229 Test: blockdev write read max offset ...passed 00:09:15.229 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:15.229 Test: blockdev writev readv 8 blocks ...passed 00:09:15.229 Test: blockdev writev readv 30 x 1block ...passed 00:09:15.229 Test: blockdev writev readv block ...passed 00:09:15.229 Test: blockdev writev readv size > 128k ...passed 00:09:15.229 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:15.229 Test: blockdev comparev and writev ...passed 00:09:15.229 Test: blockdev nvme passthru rw ...passed 00:09:15.229 Test: blockdev nvme passthru vendor specific ...passed 00:09:15.229 Test: blockdev nvme admin passthru ...passed 00:09:15.229 Test: blockdev copy ...passed 00:09:15.229 Suite: bdevio tests on: Malloc2p4 00:09:15.229 Test: blockdev write read block ...passed 00:09:15.229 Test: blockdev write zeroes read block ...passed 00:09:15.229 Test: blockdev write zeroes read no split ...passed 00:09:15.229 Test: blockdev write zeroes read split ...passed 00:09:15.229 Test: blockdev write zeroes read split partial ...passed 00:09:15.229 Test: blockdev reset ...passed 00:09:15.229 Test: blockdev write read 8 blocks ...passed 00:09:15.229 Test: blockdev write read size > 128k ...passed 00:09:15.229 Test: blockdev write read invalid size ...passed 00:09:15.229 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:15.229 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:15.229 
Test: blockdev write read max offset ...passed 00:09:15.229 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:15.229 Test: blockdev writev readv 8 blocks ...passed 00:09:15.229 Test: blockdev writev readv 30 x 1block ...passed 00:09:15.229 Test: blockdev writev readv block ...passed 00:09:15.229 Test: blockdev writev readv size > 128k ...passed 00:09:15.229 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:15.229 Test: blockdev comparev and writev ...passed 00:09:15.229 Test: blockdev nvme passthru rw ...passed 00:09:15.229 Test: blockdev nvme passthru vendor specific ...passed 00:09:15.229 Test: blockdev nvme admin passthru ...passed 00:09:15.229 Test: blockdev copy ...passed 00:09:15.229 Suite: bdevio tests on: Malloc2p3 00:09:15.229 Test: blockdev write read block ...passed 00:09:15.229 Test: blockdev write zeroes read block ...passed 00:09:15.229 Test: blockdev write zeroes read no split ...passed 00:09:15.229 Test: blockdev write zeroes read split ...passed 00:09:15.229 Test: blockdev write zeroes read split partial ...passed 00:09:15.229 Test: blockdev reset ...passed 00:09:15.229 Test: blockdev write read 8 blocks ...passed 00:09:15.229 Test: blockdev write read size > 128k ...passed 00:09:15.229 Test: blockdev write read invalid size ...passed 00:09:15.229 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:15.229 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:15.229 Test: blockdev write read max offset ...passed 00:09:15.229 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:15.229 Test: blockdev writev readv 8 blocks ...passed 00:09:15.229 Test: blockdev writev readv 30 x 1block ...passed 00:09:15.229 Test: blockdev writev readv block ...passed 00:09:15.229 Test: blockdev writev readv size > 128k ...passed 00:09:15.229 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:15.229 Test: blockdev comparev and 
writev ...passed 00:09:15.229 Test: blockdev nvme passthru rw ...passed 00:09:15.229 Test: blockdev nvme passthru vendor specific ...passed 00:09:15.229 Test: blockdev nvme admin passthru ...passed 00:09:15.229 Test: blockdev copy ...passed 00:09:15.229 Suite: bdevio tests on: Malloc2p2 00:09:15.229 Test: blockdev write read block ...passed 00:09:15.229 Test: blockdev write zeroes read block ...passed 00:09:15.229 Test: blockdev write zeroes read no split ...passed 00:09:15.229 Test: blockdev write zeroes read split ...passed 00:09:15.229 Test: blockdev write zeroes read split partial ...passed 00:09:15.229 Test: blockdev reset ...passed 00:09:15.229 Test: blockdev write read 8 blocks ...passed 00:09:15.229 Test: blockdev write read size > 128k ...passed 00:09:15.229 Test: blockdev write read invalid size ...passed 00:09:15.229 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:15.229 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:15.229 Test: blockdev write read max offset ...passed 00:09:15.229 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:15.229 Test: blockdev writev readv 8 blocks ...passed 00:09:15.229 Test: blockdev writev readv 30 x 1block ...passed 00:09:15.229 Test: blockdev writev readv block ...passed 00:09:15.229 Test: blockdev writev readv size > 128k ...passed 00:09:15.229 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:15.229 Test: blockdev comparev and writev ...passed 00:09:15.229 Test: blockdev nvme passthru rw ...passed 00:09:15.229 Test: blockdev nvme passthru vendor specific ...passed 00:09:15.229 Test: blockdev nvme admin passthru ...passed 00:09:15.229 Test: blockdev copy ...passed 00:09:15.229 Suite: bdevio tests on: Malloc2p1 00:09:15.229 Test: blockdev write read block ...passed 00:09:15.229 Test: blockdev write zeroes read block ...passed 00:09:15.229 Test: blockdev write zeroes read no split ...passed 00:09:15.229 Test: blockdev 
write zeroes read split ...passed 00:09:15.230 Test: blockdev write zeroes read split partial ...passed 00:09:15.230 Test: blockdev reset ...passed 00:09:15.230 Test: blockdev write read 8 blocks ...passed 00:09:15.230 Test: blockdev write read size > 128k ...passed 00:09:15.230 Test: blockdev write read invalid size ...passed 00:09:15.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:15.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:15.230 Test: blockdev write read max offset ...passed 00:09:15.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:15.230 Test: blockdev writev readv 8 blocks ...passed 00:09:15.230 Test: blockdev writev readv 30 x 1block ...passed 00:09:15.230 Test: blockdev writev readv block ...passed 00:09:15.230 Test: blockdev writev readv size > 128k ...passed 00:09:15.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:15.230 Test: blockdev comparev and writev ...passed 00:09:15.230 Test: blockdev nvme passthru rw ...passed 00:09:15.230 Test: blockdev nvme passthru vendor specific ...passed 00:09:15.230 Test: blockdev nvme admin passthru ...passed 00:09:15.230 Test: blockdev copy ...passed 00:09:15.230 Suite: bdevio tests on: Malloc2p0 00:09:15.230 Test: blockdev write read block ...passed 00:09:15.230 Test: blockdev write zeroes read block ...passed 00:09:15.230 Test: blockdev write zeroes read no split ...passed 00:09:15.230 Test: blockdev write zeroes read split ...passed 00:09:15.230 Test: blockdev write zeroes read split partial ...passed 00:09:15.230 Test: blockdev reset ...passed 00:09:15.230 Test: blockdev write read 8 blocks ...passed 00:09:15.230 Test: blockdev write read size > 128k ...passed 00:09:15.230 Test: blockdev write read invalid size ...passed 00:09:15.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:15.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 
00:09:15.230 Test: blockdev write read max offset ...passed 00:09:15.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:15.230 Test: blockdev writev readv 8 blocks ...passed 00:09:15.230 Test: blockdev writev readv 30 x 1block ...passed 00:09:15.230 Test: blockdev writev readv block ...passed 00:09:15.230 Test: blockdev writev readv size > 128k ...passed 00:09:15.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:15.230 Test: blockdev comparev and writev ...passed 00:09:15.230 Test: blockdev nvme passthru rw ...passed 00:09:15.230 Test: blockdev nvme passthru vendor specific ...passed 00:09:15.230 Test: blockdev nvme admin passthru ...passed 00:09:15.230 Test: blockdev copy ...passed 00:09:15.230 Suite: bdevio tests on: Malloc1p1 00:09:15.230 Test: blockdev write read block ...passed 00:09:15.230 Test: blockdev write zeroes read block ...passed 00:09:15.230 Test: blockdev write zeroes read no split ...passed 00:09:15.230 Test: blockdev write zeroes read split ...passed 00:09:15.230 Test: blockdev write zeroes read split partial ...passed 00:09:15.230 Test: blockdev reset ...passed 00:09:15.230 Test: blockdev write read 8 blocks ...passed 00:09:15.230 Test: blockdev write read size > 128k ...passed 00:09:15.230 Test: blockdev write read invalid size ...passed 00:09:15.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:15.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:15.230 Test: blockdev write read max offset ...passed 00:09:15.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:15.230 Test: blockdev writev readv 8 blocks ...passed 00:09:15.230 Test: blockdev writev readv 30 x 1block ...passed 00:09:15.230 Test: blockdev writev readv block ...passed 00:09:15.230 Test: blockdev writev readv size > 128k ...passed 00:09:15.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:15.230 Test: blockdev 
comparev and writev ...passed 00:09:15.230 Test: blockdev nvme passthru rw ...passed 00:09:15.230 Test: blockdev nvme passthru vendor specific ...passed 00:09:15.230 Test: blockdev nvme admin passthru ...passed 00:09:15.230 Test: blockdev copy ...passed 00:09:15.230 Suite: bdevio tests on: Malloc1p0 00:09:15.230 Test: blockdev write read block ...passed 00:09:15.230 Test: blockdev write zeroes read block ...passed 00:09:15.230 Test: blockdev write zeroes read no split ...passed 00:09:15.230 Test: blockdev write zeroes read split ...passed 00:09:15.230 Test: blockdev write zeroes read split partial ...passed 00:09:15.230 Test: blockdev reset ...passed 00:09:15.230 Test: blockdev write read 8 blocks ...passed 00:09:15.230 Test: blockdev write read size > 128k ...passed 00:09:15.230 Test: blockdev write read invalid size ...passed 00:09:15.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:15.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:15.230 Test: blockdev write read max offset ...passed 00:09:15.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:15.230 Test: blockdev writev readv 8 blocks ...passed 00:09:15.230 Test: blockdev writev readv 30 x 1block ...passed 00:09:15.230 Test: blockdev writev readv block ...passed 00:09:15.230 Test: blockdev writev readv size > 128k ...passed 00:09:15.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:15.230 Test: blockdev comparev and writev ...passed 00:09:15.230 Test: blockdev nvme passthru rw ...passed 00:09:15.230 Test: blockdev nvme passthru vendor specific ...passed 00:09:15.230 Test: blockdev nvme admin passthru ...passed 00:09:15.230 Test: blockdev copy ...passed 00:09:15.230 Suite: bdevio tests on: Malloc0 00:09:15.230 Test: blockdev write read block ...passed 00:09:15.230 Test: blockdev write zeroes read block ...passed 00:09:15.230 Test: blockdev write zeroes read no split ...passed 00:09:15.230 
Test: blockdev write zeroes read split ...passed 00:09:15.230 Test: blockdev write zeroes read split partial ...passed 00:09:15.230 Test: blockdev reset ...passed 00:09:15.230 Test: blockdev write read 8 blocks ...passed 00:09:15.230 Test: blockdev write read size > 128k ...passed 00:09:15.230 Test: blockdev write read invalid size ...passed 00:09:15.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:15.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:15.230 Test: blockdev write read max offset ...passed 00:09:15.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:15.230 Test: blockdev writev readv 8 blocks ...passed 00:09:15.230 Test: blockdev writev readv 30 x 1block ...passed 00:09:15.230 Test: blockdev writev readv block ...passed 00:09:15.230 Test: blockdev writev readv size > 128k ...passed 00:09:15.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:15.230 Test: blockdev comparev and writev ...passed 00:09:15.230 Test: blockdev nvme passthru rw ...passed 00:09:15.230 Test: blockdev nvme passthru vendor specific ...passed 00:09:15.230 Test: blockdev nvme admin passthru ...passed 00:09:15.230 Test: blockdev copy ...passed 00:09:15.230 00:09:15.230 Run Summary: Type Total Ran Passed Failed Inactive 00:09:15.230 suites 16 16 n/a 0 0 00:09:15.230 tests 368 368 368 0 0 00:09:15.230 asserts 2224 2224 2224 0 n/a 00:09:15.230 00:09:15.230 Elapsed time = 0.475 seconds 00:09:15.230 0 00:09:15.230 13:09:25 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 800975 00:09:15.230 13:09:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 800975 ']' 00:09:15.230 13:09:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 800975 00:09:15.230 13:09:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:09:15.230 13:09:25 blockdev_general.bdev_bounds -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:15.230 13:09:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 800975 00:09:15.230 13:09:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:15.230 13:09:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:15.230 13:09:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 800975' 00:09:15.230 killing process with pid 800975 00:09:15.230 13:09:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@969 -- # kill 800975 00:09:15.230 13:09:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@974 -- # wait 800975 00:09:15.490 13:09:25 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:15.490 00:09:15.490 real 0m1.614s 00:09:15.490 user 0m4.016s 00:09:15.490 sys 0m0.477s 00:09:15.490 13:09:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.490 13:09:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:15.490 ************************************ 00:09:15.490 END TEST bdev_bounds 00:09:15.490 ************************************ 00:09:15.490 13:09:25 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:09:15.490 13:09:25 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:15.490 13:09:25 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.490 13:09:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:15.490 ************************************ 00:09:15.490 START TEST bdev_nbd 00:09:15.490 
************************************ 00:09:15.490 13:09:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:09:15.490 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' 
'/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=801279 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 801279 /var/tmp/spdk-nbd.sock 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 801279 ']' 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:15.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:15.750 13:09:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:09:15.750 [2024-07-25 13:09:26.085993] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:09:15.750 [2024-07-25 13:09:26.086125] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:15.750 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:15.750 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:15.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.750 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:16.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.009 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:16.009 [2024-07-25 13:09:26.293863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.009 [2024-07-25 13:09:26.376534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.267 [2024-07-25 13:09:26.523340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:16.267 [2024-07-25 13:09:26.523392] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:16.267 [2024-07-25 13:09:26.523406] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev 
arrival 00:09:16.267 [2024-07-25 13:09:26.531354] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:16.267 [2024-07-25 13:09:26.531379] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:16.267 [2024-07-25 13:09:26.539362] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:16.267 [2024-07-25 13:09:26.539385] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:16.267 [2024-07-25 13:09:26.610540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:16.267 [2024-07-25 13:09:26.610584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.267 [2024-07-25 13:09:26.610598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22ab210 00:09:16.267 [2024-07-25 13:09:26.610611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.267 [2024-07-25 13:09:26.611924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.267 [2024-07-25 13:09:26.611952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:16.833 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.833 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:09:16.833 13:09:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:09:16.833 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.833 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 
'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:09:16.833 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:16.833 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:09:16.833 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.833 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:09:16.833 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:16.833 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:16.833 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:16.833 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:16.833 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:16.833 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:17.091 13:09:27 
blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.091 1+0 records in 00:09:17.091 1+0 records out 00:09:17.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242914 s, 16.9 MB/s 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:17.091 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
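The trace above is the first iteration of a loop that runs once per bdev: call `rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk <bdev>`, capture the `/dev/nbdN` node it prints, then wait for that node. A condensed sketch of that control flow is below; `rpc_start_disk` is a stand-in stub for the real `rpc.py` call (which needs a live SPDK target), and the shortened `bdev_list` is illustrative.

```shell
#!/usr/bin/env bash
# Condensed sketch of the per-bdev attach loop visible in the log.
# rpc_start_disk is a stub standing in for:
#   rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "$bdev"
# The real command asks the SPDK target to export the bdev over NBD and
# prints the /dev/nbdN node the kernel assigned.
rpc_start_disk() {
    echo "/dev/nbd$2"   # stub: pretend device N was assigned
}

bdev_list=(Malloc0 Malloc1p0 Malloc1p1 TestPT)
i=0
for bdev in "${bdev_list[@]}"; do
    nbd_device=$(rpc_start_disk "$bdev" "$i")
    echo "$bdev -> $nbd_device"
    i=$((i + 1))
done
```

In the real test the loop body then calls `waitfornbd "$(basename "$nbd_device")"` before moving on, which is the polling seen repeatedly in the log.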
nbd_device=/dev/nbd1 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.350 1+0 records in 00:09:17.350 1+0 records out 00:09:17.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284704 s, 14.4 MB/s 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:17.350 13:09:27 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:17.350 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:09:17.609 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:17.609 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:17.609 13:09:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:17.609 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:09:17.609 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:17.609 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:17.609 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:17.609 13:09:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:09:17.609 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:17.609 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:17.609 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:17.609 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.609 1+0 records in 00:09:17.609 1+0 records out 00:09:17.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338607 s, 12.1 MB/s 00:09:17.609 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:17.609 13:09:28 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:09:17.609 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:17.609 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:17.609 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:17.609 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:17.609 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:17.609 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.868 1+0 records in 00:09:17.868 1+0 records out 00:09:17.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342589 s, 12.0 MB/s 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:17.868 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:17.869 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:17.869 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep 
-q -w nbd4 /proc/partitions 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.128 1+0 records in 00:09:18.128 1+0 records out 00:09:18.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325712 s, 12.6 MB/s 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:18.128 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.387 1+0 records in 00:09:18.387 1+0 records out 00:09:18.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454043 s, 9.0 MB/s 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:18.387 13:09:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.646 1+0 records in 00:09:18.646 1+0 records out 00:09:18.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414235 s, 9.9 MB/s 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:18.646 13:09:29 
blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:18.646 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.906 1+0 records in 00:09:18.906 1+0 records out 00:09:18.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473423 s, 8.7 MB/s 00:09:18.906 13:09:29 
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:18.906 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:19.165 13:09:29 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:19.165 1+0 records in 00:09:19.165 1+0 records out 00:09:19.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493281 s, 8.3 MB/s 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:19.165 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:19.424 
13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:19.424 1+0 records in 00:09:19.424 1+0 records out 00:09:19.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516768 s, 7.9 MB/s 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:19.424 13:09:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
basename /dev/nbd10 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:19.683 1+0 records in 00:09:19.683 1+0 records out 00:09:19.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594882 s, 6.9 MB/s 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:19.683 13:09:30 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:19.683 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:19.942 1+0 records in 00:09:19.942 1+0 records out 00:09:19.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679541 s, 6.0 MB/s 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:19.942 13:09:30 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:19.942 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:09:20.201 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:09:20.201 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:09:20.201 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:09:20.201 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:09:20.201 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:20.201 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:20.201 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:20.201 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:09:20.201 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:20.201 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:20.202 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:20.202 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:20.202 1+0 
records in 00:09:20.202 1+0 records out 00:09:20.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733151 s, 5.6 MB/s 00:09:20.202 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:20.202 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:20.202 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:20.202 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:20.202 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:20.202 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:20.202 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:20.202 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # 
break 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:20.461 1+0 records in 00:09:20.461 1+0 records out 00:09:20.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534824 s, 7.7 MB/s 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:20.461 13:09:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # local i 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:20.720 1+0 records in 00:09:20.720 1+0 records out 00:09:20.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000785698 s, 5.2 MB/s 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:20.720 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:09:21.044 13:09:31 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:21.044 1+0 records in 00:09:21.044 1+0 records out 00:09:21.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679396 s, 6.0 MB/s 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:21.044 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:21.303 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd0", 00:09:21.303 "bdev_name": "Malloc0" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd1", 00:09:21.303 "bdev_name": "Malloc1p0" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd2", 00:09:21.303 "bdev_name": "Malloc1p1" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd3", 00:09:21.303 "bdev_name": "Malloc2p0" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd4", 00:09:21.303 "bdev_name": "Malloc2p1" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd5", 00:09:21.303 "bdev_name": "Malloc2p2" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd6", 00:09:21.303 "bdev_name": "Malloc2p3" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd7", 00:09:21.303 "bdev_name": "Malloc2p4" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd8", 00:09:21.303 "bdev_name": "Malloc2p5" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd9", 00:09:21.303 "bdev_name": "Malloc2p6" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd10", 00:09:21.303 "bdev_name": "Malloc2p7" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd11", 00:09:21.303 "bdev_name": "TestPT" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd12", 00:09:21.303 "bdev_name": "raid0" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd13", 00:09:21.303 "bdev_name": 
"concat0" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd14", 00:09:21.303 "bdev_name": "raid1" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd15", 00:09:21.303 "bdev_name": "AIO0" 00:09:21.303 } 00:09:21.303 ]' 00:09:21.303 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:21.303 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd0", 00:09:21.303 "bdev_name": "Malloc0" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd1", 00:09:21.303 "bdev_name": "Malloc1p0" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd2", 00:09:21.303 "bdev_name": "Malloc1p1" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd3", 00:09:21.303 "bdev_name": "Malloc2p0" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd4", 00:09:21.303 "bdev_name": "Malloc2p1" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd5", 00:09:21.303 "bdev_name": "Malloc2p2" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd6", 00:09:21.303 "bdev_name": "Malloc2p3" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd7", 00:09:21.303 "bdev_name": "Malloc2p4" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd8", 00:09:21.303 "bdev_name": "Malloc2p5" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd9", 00:09:21.303 "bdev_name": "Malloc2p6" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd10", 00:09:21.303 "bdev_name": "Malloc2p7" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd11", 00:09:21.303 "bdev_name": "TestPT" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd12", 00:09:21.303 "bdev_name": "raid0" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd13", 00:09:21.303 "bdev_name": "concat0" 
00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd14", 00:09:21.303 "bdev_name": "raid1" 00:09:21.303 }, 00:09:21.303 { 00:09:21.303 "nbd_device": "/dev/nbd15", 00:09:21.303 "bdev_name": "AIO0" 00:09:21.303 } 00:09:21.303 ]' 00:09:21.303 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:21.303 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:09:21.303 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.304 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:09:21.304 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:21.304 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:21.304 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.304 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:21.562 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:21.562 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:21.562 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:21.562 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.562 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.562 
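At `nbd_common.sh@118-119` the trace captures `nbd_get_disks` returning a JSON array, which the script turns into a bash array of device paths via `jq`. A small self-contained sketch of that step, using an illustrative two-entry JSON rather than the full sixteen-device list from the log:

```shell
#!/usr/bin/env bash
# Sketch of the nbd_get_disks JSON handling (nbd_common.sh@118-119).
# The JSON below is illustrative; the real list comes from
# rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1p0" }
]'
# jq -r prints one bare device path per line; the unquoted $(...)
# word-splits those lines into array elements, as in the trace.
nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
echo "${nbd_disks_name[@]}"   # → /dev/nbd0 /dev/nbd1
```

The resulting space-joined list is what the script then hands to `nbd_stop_disks`, which iterates it and calls `nbd_stop_disk` per device.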
13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:21.562 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:21.562 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.562 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.562 13:09:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:21.821 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:21.821 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:21.821 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:21.821 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.821 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.821 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:21.821 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:21.821 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.821 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.821 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:22.080 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:22.080 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:22.080 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:22.080 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:09:22.080 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:22.080 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:22.080 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:22.080 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:22.080 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:22.080 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:22.340 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:22.340 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:22.340 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:22.340 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:22.340 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:22.340 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:22.340 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:22.340 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:22.340 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:22.340 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:22.599 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:22.599 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:22.599 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd4 00:09:22.599 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:22.599 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:22.599 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:22.599 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:22.599 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:22.599 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:22.599 13:09:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:22.858 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:22.859 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:22.859 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:22.859 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:22.859 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:22.859 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:22.859 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:22.859 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:22.859 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:22.859 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:23.117 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:23.117 13:09:33 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:23.117 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:23.117 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.117 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.117 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:23.117 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:23.117 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.117 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.117 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:09:23.376 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:09:23.376 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:09:23.376 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:09:23.376 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.376 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.376 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:09:23.376 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:23.376 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.376 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.376 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:09:23.635 13:09:33 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8
00:09:23.635 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8
00:09:23.635 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8
00:09:23.635 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:23.635 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:23.635 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:09:23.635 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:23.635 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:23.635 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:23.635 13:09:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9
00:09:23.895 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9
00:09:23.895 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9
00:09:23.895 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9
00:09:23.895 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:23.895 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:23.895 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:09:23.895 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:23.895 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:23.895 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:23.895 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:09:24.154 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:09:24.154 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:09:24.154 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:09:24.154 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:24.154 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:24.154 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:09:24.154 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:24.154 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:24.154 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:24.154 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:09:24.154 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:09:24.154 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:09:24.154 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:09:24.154 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:24.154 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:24.154 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:09:24.414 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:24.414 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:24.414 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:24.414 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:09:24.414 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:09:24.414 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:09:24.414 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:09:24.414 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:24.414 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:24.414 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:09:24.414 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:24.414 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:24.414 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:24.414 13:09:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:09:24.674 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:09:24.674 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:09:24.674 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:09:24.674 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:24.674 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:24.674 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:09:24.674 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:24.674 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:24.674 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:24.674 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:09:24.933 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:09:24.933 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:09:24.933 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:09:24.933 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:24.933 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:24.933 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:09:24.933 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:24.933 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:24.933 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:24.933 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15
00:09:25.193 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15
00:09:25.193 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15
00:09:25.193 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15
00:09:25.193 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:25.193 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:25.193 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions
00:09:25.193 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:25.193 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:25.193 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:25.193 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:25.193 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:09:25.453 13:09:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:25.713 /dev/nbd0
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:25.713 1+0 records in
00:09:25.713 1+0 records out
00:09:25.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271063 s, 15.1 MB/s
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:09:25.713 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1
00:09:25.973 /dev/nbd1
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:25.973 1+0 records in
00:09:25.973 1+0 records out
00:09:25.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267066 s, 15.3 MB/s
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:09:25.973 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10
00:09:26.233 /dev/nbd10
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:26.233 1+0 records in
00:09:26.233 1+0 records out
00:09:26.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318543 s, 12.9 MB/s
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:09:26.233 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11
00:09:26.493 /dev/nbd11
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:26.493 1+0 records in
00:09:26.493 1+0 records out
00:09:26.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347249 s, 11.8 MB/s
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:09:26.493 13:09:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12
00:09:26.752 /dev/nbd12
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:26.752 1+0 records in
00:09:26.752 1+0 records out
00:09:26.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355873 s, 11.5 MB/s
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:09:26.752 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13
00:09:27.012 /dev/nbd13
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:27.012 1+0 records in
00:09:27.012 1+0 records out
00:09:27.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371322 s, 11.0 MB/s
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:09:27.012 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14
00:09:27.271 /dev/nbd14
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:27.271 1+0 records in
00:09:27.271 1+0 records out
00:09:27.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421605 s, 9.7 MB/s
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:27.271 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:09:27.272 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15
00:09:27.531 /dev/nbd15
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:27.531 1+0 records in
00:09:27.531 1+0 records out
00:09:27.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451545 s, 9.1 MB/s
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:09:27.531 13:09:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2
00:09:27.791 /dev/nbd2
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:27.791 1+0 records in
00:09:27.791 1+0 records out
00:09:27.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378929 s, 10.8 MB/s
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:09:27.791 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3
00:09:28.050 /dev/nbd3
00:09:28.050 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3
00:09:28.050 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3
00:09:28.050 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3
00:09:28.050 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:09:28.051 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:28.051 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:28.051 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions
00:09:28.051 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:09:28.051 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:09:28.051 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:09:28.051 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:28.051 1+0 records in
00:09:28.051 1+0 records out
00:09:28.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456571 s, 9.0 MB/s
00:09:28.051 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:28.051 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:09:28.051 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:28.051 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:09:28.051 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:09:28.051 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:28.051 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:09:28.051 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4
00:09:28.310 /dev/nbd4
00:09:28.310 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4
00:09:28.310 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4
00:09:28.310 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4
00:09:28.310 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:09:28.310 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:28.310 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:28.310 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions
00:09:28.310 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:09:28.310 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:09:28.310 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:09:28.310 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:28.310 1+0 records in
00:09:28.310 1+0 records out
00:09:28.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515902 s, 7.9 MB/s
00:09:28.310 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:28.310 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:09:28.310 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:28.310 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:09:28.310 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:09:28.311 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:28.311 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:09:28.311 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5
00:09:28.570 /dev/nbd5
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:28.570 1+0 records in
00:09:28.570 1+0 records out
00:09:28.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505481 s, 8.1 MB/s
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:09:28.570 13:09:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6
00:09:28.829 /dev/nbd6
00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6
00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6
00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6
00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions
00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:09:28.829 13:09:39
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.829 1+0 records in 00:09:28.829 1+0 records out 00:09:28.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0008227 s, 5.0 MB/s 00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:28.829 13:09:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:09:29.088 /dev/nbd7 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- 
# (( i = 1 )) 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:29.088 1+0 records in 00:09:29.088 1+0 records out 00:09:29.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651503 s, 6.3 MB/s 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:29.088 13:09:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:09:29.348 /dev/nbd8 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:09:29.348 13:09:39 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:29.348 1+0 records in 00:09:29.348 1+0 records out 00:09:29.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000716646 s, 5.7 MB/s 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i 
< 16 )) 00:09:29.348 13:09:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:09:29.607 /dev/nbd9 00:09:29.607 13:09:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:29.607 1+0 records in 00:09:29.607 1+0 records out 00:09:29.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000819284 s, 5.0 MB/s 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:29.607 13:09:40 
blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.607 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:29.866 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd0", 00:09:29.866 "bdev_name": "Malloc0" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd1", 00:09:29.866 "bdev_name": "Malloc1p0" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd10", 00:09:29.866 "bdev_name": "Malloc1p1" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd11", 00:09:29.866 "bdev_name": "Malloc2p0" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd12", 00:09:29.866 "bdev_name": "Malloc2p1" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd13", 00:09:29.866 "bdev_name": "Malloc2p2" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd14", 00:09:29.866 "bdev_name": "Malloc2p3" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd15", 00:09:29.866 "bdev_name": "Malloc2p4" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd2", 00:09:29.866 "bdev_name": "Malloc2p5" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd3", 00:09:29.866 "bdev_name": "Malloc2p6" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 
"nbd_device": "/dev/nbd4", 00:09:29.866 "bdev_name": "Malloc2p7" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd5", 00:09:29.866 "bdev_name": "TestPT" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd6", 00:09:29.866 "bdev_name": "raid0" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd7", 00:09:29.866 "bdev_name": "concat0" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd8", 00:09:29.866 "bdev_name": "raid1" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd9", 00:09:29.866 "bdev_name": "AIO0" 00:09:29.866 } 00:09:29.866 ]' 00:09:29.866 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd0", 00:09:29.866 "bdev_name": "Malloc0" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd1", 00:09:29.866 "bdev_name": "Malloc1p0" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd10", 00:09:29.866 "bdev_name": "Malloc1p1" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd11", 00:09:29.866 "bdev_name": "Malloc2p0" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd12", 00:09:29.866 "bdev_name": "Malloc2p1" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd13", 00:09:29.866 "bdev_name": "Malloc2p2" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd14", 00:09:29.866 "bdev_name": "Malloc2p3" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd15", 00:09:29.866 "bdev_name": "Malloc2p4" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd2", 00:09:29.866 "bdev_name": "Malloc2p5" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd3", 00:09:29.866 "bdev_name": "Malloc2p6" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd4", 00:09:29.866 "bdev_name": "Malloc2p7" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd5", 00:09:29.866 "bdev_name": 
"TestPT" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd6", 00:09:29.866 "bdev_name": "raid0" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd7", 00:09:29.866 "bdev_name": "concat0" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd8", 00:09:29.866 "bdev_name": "raid1" 00:09:29.866 }, 00:09:29.866 { 00:09:29.866 "nbd_device": "/dev/nbd9", 00:09:29.866 "bdev_name": "AIO0" 00:09:29.866 } 00:09:29.866 ]' 00:09:29.866 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:29.866 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:29.866 /dev/nbd1 00:09:29.866 /dev/nbd10 00:09:29.866 /dev/nbd11 00:09:29.866 /dev/nbd12 00:09:29.866 /dev/nbd13 00:09:29.866 /dev/nbd14 00:09:29.866 /dev/nbd15 00:09:29.866 /dev/nbd2 00:09:29.866 /dev/nbd3 00:09:29.866 /dev/nbd4 00:09:29.866 /dev/nbd5 00:09:29.866 /dev/nbd6 00:09:29.866 /dev/nbd7 00:09:29.867 /dev/nbd8 00:09:29.867 /dev/nbd9' 00:09:29.867 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:29.867 /dev/nbd1 00:09:29.867 /dev/nbd10 00:09:29.867 /dev/nbd11 00:09:29.867 /dev/nbd12 00:09:29.867 /dev/nbd13 00:09:29.867 /dev/nbd14 00:09:29.867 /dev/nbd15 00:09:29.867 /dev/nbd2 00:09:29.867 /dev/nbd3 00:09:29.867 /dev/nbd4 00:09:29.867 /dev/nbd5 00:09:29.867 /dev/nbd6 00:09:29.867 /dev/nbd7 00:09:29.867 /dev/nbd8 00:09:29.867 /dev/nbd9' 00:09:29.867 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:29.867 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:09:29.867 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:09:29.867 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:09:29.867 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:09:29.867 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # 
nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:09:29.867 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:29.867 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:29.867 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:29.867 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:09:29.867 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:29.867 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:29.867 256+0 records in 00:09:29.867 256+0 records out 00:09:29.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00776145 s, 135 MB/s 00:09:29.867 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:29.867 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:30.126 256+0 records in 00:09:30.126 256+0 records out 00:09:30.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165473 s, 6.3 MB/s 00:09:30.126 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:30.126 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 
00:09:30.384 256+0 records in 00:09:30.384 256+0 records out 00:09:30.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167332 s, 6.3 MB/s 00:09:30.384 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:30.384 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:30.384 256+0 records in 00:09:30.384 256+0 records out 00:09:30.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168659 s, 6.2 MB/s 00:09:30.384 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:30.384 13:09:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:30.643 256+0 records in 00:09:30.643 256+0 records out 00:09:30.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168669 s, 6.2 MB/s 00:09:30.643 13:09:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:30.643 13:09:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:30.902 256+0 records in 00:09:30.902 256+0 records out 00:09:30.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168353 s, 6.2 MB/s 00:09:30.902 13:09:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:30.902 13:09:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:30.902 256+0 records in 00:09:30.902 256+0 records out 00:09:30.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16848 s, 6.2 MB/s 00:09:30.902 13:09:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:09:30.902 13:09:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:31.161 256+0 records in 00:09:31.161 256+0 records out 00:09:31.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168885 s, 6.2 MB/s 00:09:31.161 13:09:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.161 13:09:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:09:31.419 256+0 records in 00:09:31.419 256+0 records out 00:09:31.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129372 s, 8.1 MB/s 00:09:31.419 13:09:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.419 13:09:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:09:31.419 256+0 records in 00:09:31.419 256+0 records out 00:09:31.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0930039 s, 11.3 MB/s 00:09:31.419 13:09:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.419 13:09:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:09:31.678 256+0 records in 00:09:31.678 256+0 records out 00:09:31.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168002 s, 6.2 MB/s 00:09:31.678 13:09:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.678 13:09:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:09:31.678 256+0 records in 00:09:31.678 256+0 
records out 00:09:31.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.101164 s, 10.4 MB/s 00:09:31.678 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.678 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:09:31.937 256+0 records in 00:09:31.937 256+0 records out 00:09:31.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168253 s, 6.2 MB/s 00:09:31.937 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.937 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:09:31.937 256+0 records in 00:09:31.937 256+0 records out 00:09:31.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168689 s, 6.2 MB/s 00:09:31.937 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.937 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:09:32.196 256+0 records in 00:09:32.196 256+0 records out 00:09:32.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168474 s, 6.2 MB/s 00:09:32.196 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.196 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:09:32.455 256+0 records in 00:09:32.455 256+0 records out 00:09:32.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174133 s, 6.0 MB/s 00:09:32.455 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.455 13:09:42 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:09:32.455 256+0 records in 00:09:32.455 256+0 records out 00:09:32.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167777 s, 6.2 MB/s 00:09:32.455 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:09:32.455 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:32.455 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:32.455 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:32.455 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:09:32.455 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:32.455 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:32.455 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.455 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:32.455 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.455 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:32.455 13:09:42 blockdev_general.bdev_nbd 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.455 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:32.714 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.714 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:32.714 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.714 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:32.714 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.714 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:32.714 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.714 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:32.714 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.714 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:09:32.714 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.714 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:09:32.714 13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.714 
13:09:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3
00:09:32.714 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:32.714 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:32.715 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:32.974 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:32.974 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:32.974 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:32.974 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:32.974 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:32.974 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:32.974 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:32.974 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:32.974 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:32.974 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:33.234 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:33.234 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:33.234 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:33.234 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:33.234 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:33.234 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:33.234 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:33.234 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:33.234 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:33.234 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:09:33.527 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:09:33.527 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:09:33.527 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:09:33.527 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:33.527 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:33.527 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:09:33.527 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:33.527 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:33.527 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:33.527 13:09:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:33.813 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:09:34.072 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:09:34.072 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:09:34.072 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:09:34.072 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:34.072 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:34.072 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:09:34.072 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:34.072 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:34.072 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:34.072 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:09:34.332 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:09:34.332 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:09:34.332 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:09:34.332 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:34.332 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:34.332 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:09:34.332 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:34.332 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:34.332 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:34.332 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15
00:09:34.591 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15
00:09:34.591 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15
00:09:34.591 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15
00:09:34.591 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:34.591 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:34.591 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions
00:09:34.591 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:34.591 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:34.591 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:34.591 13:09:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:09:34.851 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:09:34.851 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:09:34.851 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:09:34.851 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:34.851 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:34.851 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:09:34.851 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:34.851 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:34.851 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:34.851 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:09:35.110 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:09:35.110 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:09:35.110 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:09:35.110 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:35.110 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:35.110 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:09:35.110 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:35.110 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:35.110 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:35.110 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:09:35.369 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:09:35.369 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:09:35.369 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:09:35.369 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:35.369 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:35.369 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:09:35.369 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:35.369 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:35.369 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:35.369 13:09:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:09:35.938 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:09:35.939 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:09:35.939 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:09:35.939 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:35.939 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:35.939 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:09:35.939 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:35.939 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:35.939 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:35.939 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:36.198 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8
00:09:36.458 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8
00:09:36.458 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8
00:09:36.458 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8
00:09:36.458 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:36.458 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:36.458 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:09:36.458 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:36.458 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:36.458 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:36.458 13:09:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9
00:09:36.717 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9
00:09:36.717 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9
00:09:36.717 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9
00:09:36.717 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:36.717 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:36.717 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:09:36.717 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:36.717 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:36.717 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:36.717 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:36.717 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:09:37.286 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:09:37.545 malloc_lvol_verify
00:09:37.545 13:09:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:09:37.805 d160e262-e33a-4bd2-b96c-f51d62b87c73
00:09:37.805 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:09:38.064 d2744ee4-3697-4c91-b2bd-d4210f69e351
00:09:38.064 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:09:38.323 /dev/nbd0
00:09:38.323 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:09:38.323 mke2fs 1.46.5 (30-Dec-2021)
00:09:38.323 Discarding device blocks: 0/4096 done
00:09:38.323 Creating filesystem with 4096 1k blocks and 1024 inodes
00:09:38.323
00:09:38.323 Allocating group tables: 0/1 done
00:09:38.323 Writing inode tables: 0/1 done
00:09:38.323 Creating journal (1024 blocks): done
00:09:38.323 Writing superblocks and filesystem accounting information: 0/1 done
00:09:38.323
00:09:38.323 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:09:38.323 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:09:38.323 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:38.323 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:09:38.323 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:38.323 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:09:38.323 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:38.323 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 801279
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 801279 ']'
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 801279
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # uname
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 801279
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 801279'
00:09:38.586 killing process with pid 801279
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@969 -- # kill 801279
00:09:38.586 13:09:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@974 -- # wait 801279
00:09:38.845 13:09:49 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:09:38.845
00:09:38.845 real	0m23.229s
00:09:38.845 user	0m28.709s
00:09:38.845 sys	0m13.304s
00:09:38.845 13:09:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:38.845 13:09:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:09:38.845 ************************************
00:09:38.845 END TEST bdev_nbd
00:09:38.845 ************************************
00:09:38.845 13:09:49 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:09:38.845 13:09:49 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']'
00:09:38.845 13:09:49 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']'
00:09:38.845 13:09:49 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite ''
00:09:38.846 13:09:49 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:38.846 13:09:49 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:38.846 13:09:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:09:38.846 ************************************
00:09:38.846 START TEST bdev_fio
00:09:38.846 ************************************
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite ''
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev
00:09:38.846 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO ''
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context=
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']'
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']'
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']'
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']'
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']'
00:09:38.846 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]'
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]'
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]'
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]'
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]'
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]'
00:09:39.105 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]'
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]'
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]'
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]'
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]'
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]'
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]'
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]'
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]'
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]'
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json'
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']'
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:39.106 13:09:49 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:09:39.106 ************************************
00:09:39.106 START TEST bdev_fio_rw_verify
00:09:39.106 ************************************
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib=
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev'
00:09:39.106 13:09:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:09:39.366 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:09:39.366 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:09:39.366 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:09:39.366 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:09:39.366 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:09:39.366 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:09:39.366 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B,
ioengine=spdk_bdev, iodepth=8 00:09:39.366 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.366 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.366 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.366 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.366 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.366 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.366 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.366 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.366 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.366 fio-3.35 00:09:39.366 Starting 16 threads 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:39.625 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:39.625 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:39.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.625 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:51.841 00:09:51.841 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=806209: Thu Jul 25 13:10:00 2024 00:09:51.841 read: IOPS=98.3k, BW=384MiB/s (403MB/s)(3841MiB/10001msec) 
00:09:51.841 slat (usec): min=2, max=599, avg=33.58, stdev=13.10 00:09:51.841 clat (usec): min=9, max=1195, avg=265.23, stdev=117.65 00:09:51.841 lat (usec): min=15, max=1263, avg=298.81, stdev=124.36 00:09:51.841 clat percentiles (usec): 00:09:51.841 | 50.000th=[ 260], 99.000th=[ 510], 99.900th=[ 594], 99.990th=[ 848], 00:09:51.841 | 99.999th=[ 1090] 00:09:51.841 write: IOPS=154k, BW=603MiB/s (632MB/s)(5950MiB/9867msec); 0 zone resets 00:09:51.841 slat (usec): min=7, max=3331, avg=45.24, stdev=13.40 00:09:51.841 clat (usec): min=11, max=3644, avg=310.47, stdev=137.58 00:09:51.841 lat (usec): min=33, max=3676, avg=355.71, stdev=143.72 00:09:51.841 clat percentiles (usec): 00:09:51.841 | 50.000th=[ 297], 99.000th=[ 644], 99.900th=[ 857], 99.990th=[ 955], 00:09:51.841 | 99.999th=[ 1582] 00:09:51.841 bw ( KiB/s): min=501808, max=770098, per=99.04%, avg=611508.58, stdev=4446.05, samples=304 00:09:51.841 iops : min=125452, max=192519, avg=152876.74, stdev=1111.48, samples=304 00:09:51.841 lat (usec) : 10=0.01%, 20=0.01%, 50=0.69%, 100=4.91%, 250=36.37% 00:09:51.841 lat (usec) : 500=51.89%, 750=5.83%, 1000=0.28% 00:09:51.841 lat (msec) : 2=0.01%, 4=0.01% 00:09:51.841 cpu : usr=99.29%, sys=0.34%, ctx=581, majf=0, minf=3335 00:09:51.841 IO depths : 1=12.5%, 2=24.9%, 4=50.2%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.842 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.842 issued rwts: total=983333,1523130,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.842 latency : target=0, window=0, percentile=100.00%, depth=8 00:09:51.842 00:09:51.842 Run status group 0 (all jobs): 00:09:51.842 READ: bw=384MiB/s (403MB/s), 384MiB/s-384MiB/s (403MB/s-403MB/s), io=3841MiB (4028MB), run=10001-10001msec 00:09:51.842 WRITE: bw=603MiB/s (632MB/s), 603MiB/s-603MiB/s (632MB/s-632MB/s), io=5950MiB (6239MB), run=9867-9867msec 00:09:51.842 00:09:51.842 real 0m11.474s 00:09:51.842 
user 2m53.182s 00:09:51.842 sys 0m1.261s 00:09:51.842 13:10:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.842 13:10:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:09:51.842 ************************************ 00:09:51.842 END TEST bdev_fio_rw_verify 00:09:51.842 ************************************ 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- 
common/autotest_common.sh@1301 -- # cat 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:09:51.842 13:10:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:09:51.843 13:10:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "3b808569-5006-4bf6-b730-7f8599e97edf"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3b808569-5006-4bf6-b730-7f8599e97edf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "cf8815c4-9c4d-5211-aaf6-9e90f052b924"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "cf8815c4-9c4d-5211-aaf6-9e90f052b924",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "3112f734-0144-5fc4-96af-64386df9f73c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3112f734-0144-5fc4-96af-64386df9f73c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "dc1ffe69-c04f-5a38-a70e-4e5b87f8a2b4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dc1ffe69-c04f-5a38-a70e-4e5b87f8a2b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "ae4d7ea3-29ff-556e-9b5c-afb8876b0370"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ae4d7ea3-29ff-556e-9b5c-afb8876b0370",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "ed6219d7-2950-585d-8727-1830b5245326"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ed6219d7-2950-585d-8727-1830b5245326",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "ecf98253-73d7-53fe-9929-3414f513bac0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ecf98253-73d7-53fe-9929-3414f513bac0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "36e3592e-750f-5875-b52a-d2fd9dac43c3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "36e3592e-750f-5875-b52a-d2fd9dac43c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' 
"flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8adffbb4-1e2e-5d77-b384-2ca45ba86537"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8adffbb4-1e2e-5d77-b384-2ca45ba86537",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "fc26b94f-c978-5a03-b64f-2b2404ba7fbc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fc26b94f-c978-5a03-b64f-2b2404ba7fbc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "50fe448f-0154-5ce4-a63d-c3b2809e2983"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "50fe448f-0154-5ce4-a63d-c3b2809e2983",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "ba0fd3f2-41f1-5ed1-a815-aa41ef0b8867"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ba0fd3f2-41f1-5ed1-a815-aa41ef0b8867",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' 
' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "abe5c8f1-ae94-4926-b1a2-7aa846e6925d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "abe5c8f1-ae94-4926-b1a2-7aa846e6925d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "abe5c8f1-ae94-4926-b1a2-7aa846e6925d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' 
' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "77cafd76-8026-483a-bf9f-516cfe300e35",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "18d8d4d7-4fbe-4c6e-851c-85e73644d78a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "5029a9cf-a6cd-44ef-a865-e479debb31f4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5029a9cf-a6cd-44ef-a865-e479debb31f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "5029a9cf-a6cd-44ef-a865-e479debb31f4",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "6ad27282-156f-4561-a619-c9fcafa17583",' ' "is_configured": true,' ' 
"data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "f1e8ac09-9112-46d7-9914-7e97ee1df239",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ee7e4046-a52e-4593-a529-cc5c8b44412b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ee7e4046-a52e-4593-a529-cc5c8b44412b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ee7e4046-a52e-4593-a529-cc5c8b44412b",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "3f0fce89-5697-409f-97d3-4387f9e498c9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "f6258df2-de48-484f-ac3b-46d1ab22219c",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "d34ba14a-1aac-4406-88fc-8783c235370c"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "d34ba14a-1aac-4406-88fc-8783c235370c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:09:51.843 13:10:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:09:51.843 Malloc1p0 00:09:51.843 Malloc1p1 00:09:51.843 Malloc2p0 00:09:51.843 Malloc2p1 00:09:51.843 Malloc2p2 00:09:51.843 Malloc2p3 00:09:51.843 Malloc2p4 00:09:51.843 Malloc2p5 00:09:51.843 Malloc2p6 00:09:51.843 Malloc2p7 00:09:51.843 TestPT 00:09:51.843 raid0 00:09:51.843 concat0 ]] 00:09:51.843 13:10:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:09:51.845 13:10:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "3b808569-5006-4bf6-b730-7f8599e97edf"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3b808569-5006-4bf6-b730-7f8599e97edf",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "cf8815c4-9c4d-5211-aaf6-9e90f052b924"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "cf8815c4-9c4d-5211-aaf6-9e90f052b924",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "3112f734-0144-5fc4-96af-64386df9f73c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"3112f734-0144-5fc4-96af-64386df9f73c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "dc1ffe69-c04f-5a38-a70e-4e5b87f8a2b4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dc1ffe69-c04f-5a38-a70e-4e5b87f8a2b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "ae4d7ea3-29ff-556e-9b5c-afb8876b0370"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ae4d7ea3-29ff-556e-9b5c-afb8876b0370",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "ed6219d7-2950-585d-8727-1830b5245326"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ed6219d7-2950-585d-8727-1830b5245326",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "ecf98253-73d7-53fe-9929-3414f513bac0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ecf98253-73d7-53fe-9929-3414f513bac0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' 
' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "36e3592e-750f-5875-b52a-d2fd9dac43c3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "36e3592e-750f-5875-b52a-d2fd9dac43c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8adffbb4-1e2e-5d77-b384-2ca45ba86537"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8adffbb4-1e2e-5d77-b384-2ca45ba86537",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "fc26b94f-c978-5a03-b64f-2b2404ba7fbc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fc26b94f-c978-5a03-b64f-2b2404ba7fbc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "50fe448f-0154-5ce4-a63d-c3b2809e2983"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "50fe448f-0154-5ce4-a63d-c3b2809e2983",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "ba0fd3f2-41f1-5ed1-a815-aa41ef0b8867"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ba0fd3f2-41f1-5ed1-a815-aa41ef0b8867",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "abe5c8f1-ae94-4926-b1a2-7aa846e6925d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "abe5c8f1-ae94-4926-b1a2-7aa846e6925d",' ' "assigned_rate_limits": {' 
' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "abe5c8f1-ae94-4926-b1a2-7aa846e6925d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "77cafd76-8026-483a-bf9f-516cfe300e35",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "18d8d4d7-4fbe-4c6e-851c-85e73644d78a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "5029a9cf-a6cd-44ef-a865-e479debb31f4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5029a9cf-a6cd-44ef-a865-e479debb31f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "5029a9cf-a6cd-44ef-a865-e479debb31f4",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "6ad27282-156f-4561-a619-c9fcafa17583",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "f1e8ac09-9112-46d7-9914-7e97ee1df239",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ee7e4046-a52e-4593-a529-cc5c8b44412b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ee7e4046-a52e-4593-a529-cc5c8b44412b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' 
' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ee7e4046-a52e-4593-a529-cc5c8b44412b",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "3f0fce89-5697-409f-97d3-4387f9e498c9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "f6258df2-de48-484f-ac3b-46d1ab22219c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "d34ba14a-1aac-4406-88fc-8783c235370c"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "d34ba14a-1aac-4406-88fc-8783c235370c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": 
false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]' 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]' 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]' 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.845 13:10:01 
blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]' 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]' 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]' 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]' 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]' 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]' 00:09:51.845 13:10:01 
blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]' 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]' 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]' 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]' 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 
--aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.845 13:10:01 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:09:51.845 ************************************ 00:09:51.845 START TEST bdev_fio_trim 00:09:51.845 ************************************ 00:09:51.845 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:09:51.846 13:10:01 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:51.846 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.846 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.846 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.846 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.846 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.846 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.846 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.846 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.846 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.846 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.846 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.846 job_TestPT: (g=0): 
rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.846 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.846 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.846 fio-3.35 00:09:51.846 Starting 14 threads 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:51.846 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:51.846 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.846 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:51.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.847 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:51.847 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.847 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:51.847 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.847 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:51.847 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.847 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:04.088 00:10:04.088 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=808476: Thu Jul 25 13:10:12 2024 00:10:04.088 write: IOPS=143k, BW=560MiB/s (588MB/s)(5605MiB/10001msec); 0 zone resets 00:10:04.088 slat (usec): min=3, max=559, avg=34.57, stdev= 8.96 00:10:04.088 clat (usec): min=39, max=3392, avg=242.04, stdev=80.70 00:10:04.088 lat (usec): min=53, max=3426, avg=276.61, stdev=83.47 00:10:04.088 clat percentiles (usec): 00:10:04.088 | 50.000th=[ 237], 99.000th=[ 416], 99.900th=[ 498], 99.990th=[ 734], 00:10:04.088 | 99.999th=[ 898] 00:10:04.088 bw ( KiB/s): min=485520, max=754117, per=100.00%, avg=576729.95, stdev=5211.22, samples=266 00:10:04.088 iops : min=121376, max=188527, avg=144182.26, stdev=1302.81, samples=266 00:10:04.088 trim: IOPS=143k, BW=560MiB/s (588MB/s)(5605MiB/10001msec); 0 zone resets 00:10:04.088 slat (usec): min=5, max=3136, avg=23.94, stdev= 6.82 00:10:04.088 clat (usec): min=6, max=3426, avg=276.02, stdev=84.29 00:10:04.088 lat (usec): min=18, max=3448, avg=299.96, stdev=86.53 
00:10:04.088 clat percentiles (usec): 00:10:04.088 | 50.000th=[ 269], 99.000th=[ 457], 99.900th=[ 553], 99.990th=[ 816], 00:10:04.088 | 99.999th=[ 988] 00:10:04.088 bw ( KiB/s): min=485504, max=754181, per=100.00%, avg=576729.53, stdev=5212.52, samples=266 00:10:04.088 iops : min=121376, max=188543, avg=144182.37, stdev=1303.11, samples=266 00:10:04.088 lat (usec) : 10=0.01%, 20=0.01%, 50=0.05%, 100=1.00%, 250=48.07% 00:10:04.088 lat (usec) : 500=50.71%, 750=0.15%, 1000=0.01% 00:10:04.088 lat (msec) : 2=0.01%, 4=0.01% 00:10:04.088 cpu : usr=99.62%, sys=0.00%, ctx=500, majf=0, minf=1079 00:10:04.088 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.088 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.088 issued rwts: total=0,1434979,1434983,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.088 latency : target=0, window=0, percentile=100.00%, depth=8 00:10:04.088 00:10:04.088 Run status group 0 (all jobs): 00:10:04.088 WRITE: bw=560MiB/s (588MB/s), 560MiB/s-560MiB/s (588MB/s-588MB/s), io=5605MiB (5878MB), run=10001-10001msec 00:10:04.088 TRIM: bw=560MiB/s (588MB/s), 560MiB/s-560MiB/s (588MB/s-588MB/s), io=5605MiB (5878MB), run=10001-10001msec 00:10:04.088 00:10:04.088 real 0m11.546s 00:10:04.088 user 2m34.138s 00:10:04.088 sys 0m0.639s 00:10:04.088 13:10:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.088 13:10:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:10:04.088 ************************************ 00:10:04.088 END TEST bdev_fio_trim 00:10:04.088 ************************************ 00:10:04.088 13:10:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:10:04.088 13:10:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
00:10:04.088 13:10:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:10:04.088 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:10:04.088 13:10:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:10:04.088 00:10:04.088 real 0m23.397s 00:10:04.088 user 5m27.534s 00:10:04.088 sys 0m2.096s 00:10:04.088 13:10:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.088 13:10:12 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:10:04.088 ************************************ 00:10:04.088 END TEST bdev_fio 00:10:04.088 ************************************ 00:10:04.088 13:10:12 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:04.088 13:10:12 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:04.088 13:10:12 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:10:04.088 13:10:12 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.088 13:10:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:04.088 ************************************ 00:10:04.088 START TEST bdev_verify 00:10:04.088 ************************************ 00:10:04.088 13:10:12 blockdev_general.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:04.088 [2024-07-25 13:10:12.828073] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:10:04.088 [2024-07-25 13:10:12.828133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810296 ] 00:10:04.089 [2024-07-25 13:10:12.959288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:04.089 [2024-07-25 13:10:13.045035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.089 [2024-07-25 13:10:13.045041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.089 [2024-07-25 13:10:13.183924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:04.089 [2024-07-25 13:10:13.183975] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:04.089 [2024-07-25 13:10:13.183988] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:04.089 [2024-07-25 13:10:13.191935] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:04.089 [2024-07-25 13:10:13.191960] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:04.089 [2024-07-25 13:10:13.199948] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:04.089 [2024-07-25 13:10:13.199971] bdev.c:8190:bdev_open_ext:
*NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:04.089 [2024-07-25 13:10:13.271265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:04.089 [2024-07-25 13:10:13.271311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.089 [2024-07-25 13:10:13.271326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20c8b60 00:10:04.089 [2024-07-25 13:10:13.271338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.089 [2024-07-25 13:10:13.272798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.089 [2024-07-25 13:10:13.272826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:04.089 Running I/O for 5 seconds... 00:10:09.366 00:10:09.366 Latency(us) 00:10:09.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.366 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x0 length 0x1000 00:10:09.366 Malloc0 : 5.20 1108.53 4.33 0.00 0.00 115221.43 668.47 270113.18 00:10:09.366 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x1000 length 0x1000 00:10:09.366 Malloc0 : 5.18 1086.82 4.25 0.00 0.00 117515.87 514.46 414397.24 00:10:09.366 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x0 length 0x800 00:10:09.366 Malloc1p0 : 5.20 566.22 2.21 0.00 0.00 224723.08 3263.69 253335.96 00:10:09.366 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x800 length 0x800 00:10:09.366 Malloc1p0 : 5.18 567.89 2.22 0.00 0.00 224121.23 3263.69 241591.91 00:10:09.366 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x0 
length 0x800 00:10:09.366 Malloc1p1 : 5.20 565.86 2.21 0.00 0.00 224137.49 3355.44 246625.08 00:10:09.366 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x800 length 0x800 00:10:09.366 Malloc1p1 : 5.19 567.67 2.22 0.00 0.00 223450.54 3355.44 233203.30 00:10:09.366 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x0 length 0x200 00:10:09.366 Malloc2p0 : 5.21 565.53 2.21 0.00 0.00 223554.29 3316.12 239914.19 00:10:09.366 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x200 length 0x200 00:10:09.366 Malloc2p0 : 5.19 567.44 2.22 0.00 0.00 222791.84 3316.12 228170.14 00:10:09.366 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x0 length 0x200 00:10:09.366 Malloc2p1 : 5.21 565.31 2.21 0.00 0.00 222916.58 3381.66 234881.02 00:10:09.366 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x200 length 0x200 00:10:09.366 Malloc2p1 : 5.19 567.21 2.22 0.00 0.00 222157.06 3381.66 219781.53 00:10:09.366 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x0 length 0x200 00:10:09.366 Malloc2p2 : 5.21 565.10 2.21 0.00 0.00 222272.14 3263.69 229847.86 00:10:09.366 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x200 length 0x200 00:10:09.366 Malloc2p2 : 5.19 566.97 2.21 0.00 0.00 221514.42 3303.01 214748.36 00:10:09.366 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x0 length 0x200 00:10:09.366 Malloc2p3 : 5.21 564.88 2.21 0.00 0.00 221669.76 3381.66 224814.69 00:10:09.366 Job: Malloc2p3 (Core Mask 0x2, workload: verify, 
depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x200 length 0x200 00:10:09.366 Malloc2p3 : 5.19 566.75 2.21 0.00 0.00 220918.31 3381.66 209715.20 00:10:09.366 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x0 length 0x200 00:10:09.366 Malloc2p4 : 5.21 564.67 2.21 0.00 0.00 221048.27 3276.80 221459.25 00:10:09.366 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x200 length 0x200 00:10:09.366 Malloc2p4 : 5.20 566.52 2.21 0.00 0.00 220310.13 3316.12 204682.04 00:10:09.366 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x0 length 0x200 00:10:09.366 Malloc2p5 : 5.22 564.45 2.20 0.00 0.00 220450.79 3316.12 219781.53 00:10:09.366 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x200 length 0x200 00:10:09.366 Malloc2p5 : 5.20 566.17 2.21 0.00 0.00 219765.77 3289.91 203004.31 00:10:09.366 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:09.366 Verification LBA range: start 0x0 length 0x200 00:10:09.367 Malloc2p6 : 5.22 564.24 2.20 0.00 0.00 219913.05 3355.44 213070.64 00:10:09.367 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:09.367 Verification LBA range: start 0x200 length 0x200 00:10:09.367 Malloc2p6 : 5.20 565.81 2.21 0.00 0.00 219278.53 3316.12 199648.87 00:10:09.367 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:09.367 Verification LBA range: start 0x0 length 0x200 00:10:09.367 Malloc2p7 : 5.22 564.03 2.20 0.00 0.00 219307.37 3381.66 205520.90 00:10:09.367 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:09.367 Verification LBA range: start 0x200 length 0x200 00:10:09.367 Malloc2p7 : 5.26 583.58 2.28 0.00 0.00 212002.97 3381.66 
194615.71 00:10:09.367 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:09.367 Verification LBA range: start 0x0 length 0x1000 00:10:09.367 TestPT : 5.27 561.27 2.19 0.00 0.00 219209.04 10747.90 204682.04 00:10:09.367 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:09.367 Verification LBA range: start 0x1000 length 0x1000 00:10:09.367 TestPT : 5.27 559.05 2.18 0.00 0.00 220418.72 13631.49 270113.18 00:10:09.367 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:09.367 Verification LBA range: start 0x0 length 0x2000 00:10:09.367 raid0 : 5.27 582.64 2.28 0.00 0.00 210593.58 3158.84 176160.77 00:10:09.367 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:09.367 Verification LBA range: start 0x2000 length 0x2000 00:10:09.367 raid0 : 5.27 583.13 2.28 0.00 0.00 210489.95 3198.16 162739.00 00:10:09.367 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:09.367 Verification LBA range: start 0x0 length 0x2000 00:10:09.367 concat0 : 5.28 582.20 2.27 0.00 0.00 210162.43 3237.48 171966.46 00:10:09.367 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:09.367 Verification LBA range: start 0x2000 length 0x2000 00:10:09.367 concat0 : 5.27 582.68 2.28 0.00 0.00 210039.56 3250.59 166933.30 00:10:09.367 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:09.367 Verification LBA range: start 0x0 length 0x1000 00:10:09.367 raid1 : 5.28 581.82 2.27 0.00 0.00 209648.01 3696.23 176999.63 00:10:09.367 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:09.367 Verification LBA range: start 0x1000 length 0x1000 00:10:09.367 raid1 : 5.28 582.23 2.27 0.00 0.00 209538.38 3801.09 174483.05 00:10:09.367 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:09.367 Verification LBA range: start 0x0 length 0x4e2 00:10:09.367 AIO0 : 5.28 581.66 2.27 0.00 
0.00 209030.20 1461.45 183710.52 00:10:09.367 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:09.367 Verification LBA range: start 0x4e2 length 0x4e2 00:10:09.367 AIO0 : 5.28 581.94 2.27 0.00 0.00 208984.02 1500.77 181193.93 00:10:09.367 =================================================================================================================== 00:10:09.367 Total : 19310.28 75.43 0.00 0.00 206559.70 514.46 414397.24 00:10:09.367 00:10:09.367 real 0m6.400s 00:10:09.367 user 0m11.920s 00:10:09.367 sys 0m0.372s 00:10:09.367 13:10:19 blockdev_general.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.367 13:10:19 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:09.367 ************************************ 00:10:09.367 END TEST bdev_verify 00:10:09.367 ************************************ 00:10:09.367 13:10:19 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:09.367 13:10:19 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:10:09.367 13:10:19 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.367 13:10:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:09.367 ************************************ 00:10:09.367 START TEST bdev_verify_big_io 00:10:09.367 ************************************ 00:10:09.367 13:10:19 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:09.367 [2024-07-25 13:10:19.312485] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:10:09.367 [2024-07-25 13:10:19.312541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid811439 ] 00:10:09.367 [2024-07-25 13:10:19.444066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:09.367 [2024-07-25 13:10:19.528549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.367 [2024-07-25 13:10:19.528554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.367 [2024-07-25 13:10:19.673706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:09.367 [2024-07-25 13:10:19.673760] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:09.367 [2024-07-25 13:10:19.673773] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:09.367 [2024-07-25 13:10:19.681717] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:09.367 [2024-07-25 13:10:19.681742] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:09.367 [2024-07-25 13:10:19.689730] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:09.367 [2024-07-25 13:10:19.689753] bdev.c:8190:bdev_open_ext:
*NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:09.367 [2024-07-25 13:10:19.760935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:09.367 [2024-07-25 13:10:19.760984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.367 [2024-07-25 13:10:19.760998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2104b60 00:10:09.367 [2024-07-25 13:10:19.761010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.368 [2024-07-25 13:10:19.762481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.368 [2024-07-25 13:10:19.762509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:09.627 [2024-07-25 13:10:19.923518] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:10:09.627 [2024-07-25 13:10:19.924619] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:10:09.627 [2024-07-25 13:10:19.926243] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:10:09.627 [2024-07-25 13:10:19.927326] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:10:09.627 [2024-07-25 13:10:19.928823] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:10:09.627 [2024-07-25 13:10:19.929762] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:10:09.627 [2024-07-25 13:10:19.931214] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:10:09.627 [2024-07-25 13:10:19.932656] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:10:09.627 [2024-07-25 13:10:19.933601] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:10:09.627 [2024-07-25 13:10:19.935028] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:10:09.627 [2024-07-25 13:10:19.935988] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). 
Queue depth is limited to 32 00:10:09.627 [2024-07-25 13:10:19.937427] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:10:09.627 [2024-07-25 13:10:19.938292] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:10:09.627 [2024-07-25 13:10:19.939499] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:10:09.627 [2024-07-25 13:10:19.940264] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:10:09.627 [2024-07-25 13:10:19.941472] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:10:09.627 [2024-07-25 13:10:19.961918] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:10:09.627 [2024-07-25 13:10:19.963638] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). 
Queue depth is limited to 78
00:10:09.627 Running I/O for 5 seconds...
00:10:17.748
00:10:17.748 Latency(us)
00:10:17.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:17.748 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x0 length 0x100
00:10:17.748 Malloc0 : 5.81 176.13 11.01 0.00 0.00 710509.94 809.37 1919313.51
00:10:17.748 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x100 length 0x100
00:10:17.748 Malloc0 : 6.07 147.70 9.23 0.00 0.00 852064.29 799.54 2308544.92
00:10:17.748 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x0 length 0x80
00:10:17.748 Malloc1p0 : 6.76 35.52 2.22 0.00 0.00 3212249.21 1350.04 5395552.67
00:10:17.748 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x80 length 0x80
00:10:17.748 Malloc1p0 : 6.30 87.05 5.44 0.00 0.00 1367112.78 2175.80 2751463.42
00:10:17.748 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x0 length 0x80
00:10:17.748 Malloc1p1 : 6.76 35.51 2.22 0.00 0.00 3108720.05 1356.60 5207647.85
00:10:17.748 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x80 length 0x80
00:10:17.748 Malloc1p1 : 6.75 35.57 2.22 0.00 0.00 3157620.09 1389.36 5502926.85
00:10:17.748 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x0 length 0x20
00:10:17.748 Malloc2p0 : 6.21 25.76 1.61 0.00 0.00 1101090.81 566.89 2080374.78
00:10:17.748 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x20 length 0x20
00:10:17.748 Malloc2p0 : 6.20 23.21 1.45 0.00 0.00 1225102.42 583.27 2026687.69
00:10:17.748 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x0 length 0x20
00:10:17.748 Malloc2p1 : 6.21 25.75 1.61 0.00 0.00 1092405.42 583.27 2053531.24
00:10:17.748 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x20 length 0x20
00:10:17.748 Malloc2p1 : 6.21 23.20 1.45 0.00 0.00 1214205.37 589.82 1999844.15
00:10:17.748 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x0 length 0x20
00:10:17.748 Malloc2p2 : 6.22 25.73 1.61 0.00 0.00 1082959.74 573.44 2026687.69
00:10:17.748 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x20 length 0x20
00:10:17.748 Malloc2p2 : 6.21 23.20 1.45 0.00 0.00 1203169.68 579.99 1973000.60
00:10:17.748 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x0 length 0x20
00:10:17.748 Malloc2p3 : 6.22 25.73 1.61 0.00 0.00 1072997.30 576.72 1999844.15
00:10:17.748 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x20 length 0x20
00:10:17.748 Malloc2p3 : 6.21 23.19 1.45 0.00 0.00 1192048.99 586.55 1946157.06
00:10:17.748 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x0 length 0x20
00:10:17.748 Malloc2p4 : 6.22 25.72 1.61 0.00 0.00 1063269.94 573.44 1973000.60
00:10:17.748 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x20 length 0x20
00:10:17.748 Malloc2p4 : 6.21 23.19 1.45 0.00 0.00 1181505.37 589.82 1919313.51
00:10:17.748 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x0 length 0x20
00:10:17.748 Malloc2p5 : 6.22 25.72 1.61 0.00 0.00 1052830.89 576.72 1946157.06
00:10:17.748 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x20 length 0x20
00:10:17.748 Malloc2p5 : 6.21 23.18 1.45 0.00 0.00 1170857.84 586.55 1892469.96
00:10:17.748 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x0 length 0x20
00:10:17.748 Malloc2p6 : 6.22 25.71 1.61 0.00 0.00 1043269.95 579.99 1919313.51
00:10:17.748 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x20 length 0x20
00:10:17.748 Malloc2p6 : 6.21 23.17 1.45 0.00 0.00 1160240.89 596.38 1865626.42
00:10:17.748 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x0 length 0x20
00:10:17.748 Malloc2p7 : 6.22 25.71 1.61 0.00 0.00 1033390.46 576.72 1892469.96
00:10:17.748 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x20 length 0x20
00:10:17.748 Malloc2p7 : 6.30 25.41 1.59 0.00 0.00 1056507.36 599.65 1838782.87
00:10:17.748 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:17.748 Verification LBA range: start 0x0 length 0x100
00:10:17.749 TestPT : 6.85 39.73 2.48 0.00 0.00 2568232.01 1343.49 4831838.21
00:10:17.749 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:17.749 Verification LBA range: start 0x100 length 0x100
00:10:17.749 TestPT : 6.55 34.53 2.16 0.00 0.00 2991511.24 79691.78 3489660.93
00:10:17.749 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:17.749 Verification LBA range: start 0x0 length 0x200
00:10:17.749 raid0 : 6.85 42.05 2.63 0.00 0.00 2370074.27 1435.24 4643933.39
00:10:17.749 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:17.749 Verification LBA range: start 0x200 length 0x200
00:10:17.749 raid0 : 6.84 39.79 2.49 0.00 0.00 2491118.64 1461.45 4751307.57
00:10:17.749 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:17.749 Verification LBA range: start 0x0 length 0x200
00:10:17.749 concat0 : 6.82 46.90 2.93 0.00 0.00 2070114.97 1435.24 4456028.57
00:10:17.749 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:17.749 Verification LBA range: start 0x200 length 0x200
00:10:17.749 concat0 : 6.84 46.80 2.93 0.00 0.00 2088661.08 1644.95 4563402.75
00:10:17.749 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:17.749 Verification LBA range: start 0x0 length 0x100
00:10:17.749 raid1 : 6.82 61.25 3.83 0.00 0.00 1552843.93 1861.22 4268123.75
00:10:17.749 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:17.749 Verification LBA range: start 0x100 length 0x100
00:10:17.749 raid1 : 6.80 65.45 4.09 0.00 0.00 1461375.67 1887.44 4402341.48
00:10:17.749 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:10:17.749 Verification LBA range: start 0x0 length 0x4e
00:10:17.749 AIO0 : 6.85 71.09 4.44 0.00 0.00 795956.72 740.56 2751463.42
00:10:17.749 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:10:17.749 Verification LBA range: start 0x4e length 0x4e
00:10:17.749 AIO0 : 6.84 57.90 3.62 0.00 0.00 982486.65 394.85 2899102.92
00:10:17.749 ===================================================================================================================
00:10:17.749 Total : 1416.54 88.53 0.00 0.00 1475170.10 394.85 5502926.85
00:10:17.749
00:10:17.749 real 0m8.024s
00:10:17.749 user 0m15.135s
00:10:17.749 sys 0m0.390s
00:10:17.749 13:10:27 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:17.749 13:10:27 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
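Editor's note: the bdevperf warnings near the start of this run ("queue depth (-q, 128) can't exceed ... simultaneously (32). Queue depth is limited to 32") describe a simple per-bdev clamp of the requested queue depth. The sketch below only illustrates that arithmetic; the variable names are invented for the example and are not taken from the bdevperf source.

```shell
# Illustrative sketch of the clamp described by the warning, not bdevperf code.
requested_qd=128   # the value passed with -q
max_in_flight=32   # IO requests the bdev (e.g. a Malloc2pX split) accepts at once

if [ "$requested_qd" -gt "$max_in_flight" ]; then
    effective_qd=$max_in_flight   # clamp to what the bdev can queue
else
    effective_qd=$requested_qd
fi
echo "Queue depth is limited to $effective_qd"
```

This is why the Malloc2pX jobs above report depth 32 and the AIO0 jobs depth 78 even though the run was started with -q 128.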
00:10:17.749 ************************************
00:10:17.749 END TEST bdev_verify_big_io
00:10:17.749 ************************************
00:10:17.749 13:10:27 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:17.749 13:10:27 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:10:17.749 13:10:27 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:17.749 13:10:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:17.749 ************************************
00:10:17.749 START TEST bdev_write_zeroes
00:10:17.749 ************************************
00:10:17.749 13:10:27 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:17.749 [2024-07-25 13:10:27.424060] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:10:17.749 [2024-07-25 13:10:27.424118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid812780 ] 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:17.749 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:17.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.749 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:17.749 [2024-07-25 13:10:27.555119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.749 [2024-07-25 13:10:27.638758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.749 [2024-07-25 13:10:27.787744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:17.749 [2024-07-25 13:10:27.787791] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:17.749 [2024-07-25 13:10:27.787804] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:17.749 [2024-07-25 13:10:27.795760] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:17.749 [2024-07-25 13:10:27.795789] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:17.749 [2024-07-25 13:10:27.803766] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:17.749 [2024-07-25 13:10:27.803789] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:17.749 [2024-07-25 13:10:27.874784] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:17.749 [2024-07-25 13:10:27.874830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:17.749 [2024-07-25 13:10:27.874845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1778390
00:10:17.749 [2024-07-25 13:10:27.874856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:17.749 [2024-07-25 13:10:27.876297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:17.749 [2024-07-25 13:10:27.876326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
Running I/O for 1 seconds...
00:10:18.686
00:10:18.686 Latency(us)
00:10:18.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:18.686 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.686 Malloc0 : 1.04 5401.09 21.10 0.00 0.00 23702.29 616.04 39636.17
00:10:18.686 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.686 Malloc1p0 : 1.04 5393.95 21.07 0.00 0.00 23692.61 829.03 38797.31
00:10:18.686 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.686 Malloc1p1 : 1.05 5386.88 21.04 0.00 0.00 23676.33 832.31 37958.45
00:10:18.686 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.686 Malloc2p0 : 1.05 5379.78 21.01 0.00 0.00 23659.54 838.86 37119.59
00:10:18.686 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.686 Malloc2p1 : 1.05 5372.73 20.99 0.00 0.00 23640.04 832.31 36280.73
00:10:18.686 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.686 Malloc2p2 : 1.05 5365.74 20.96 0.00 0.00 23623.45 835.58 35441.87
00:10:18.686 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.686 Malloc2p3 : 1.05 5358.68 20.93 0.00 0.00 23606.66 838.86 34603.01
00:10:18.686 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.686 Malloc2p4 : 1.05 5351.70 20.91 0.00 0.00 23589.61 832.31 33764.15
00:10:18.686 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.686 Malloc2p5 : 1.05 5344.77 20.88 0.00 0.00 23571.03 832.31 32925.29
00:10:18.686 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.686 Malloc2p6 : 1.06 5337.77 20.85 0.00 0.00 23553.24 829.03 32086.43
00:10:18.686 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.686 Malloc2p7 : 1.06 5330.86 20.82 0.00 0.00 23536.45 829.03 31457.28
00:10:18.686 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.686 TestPT : 1.06 5323.99 20.80 0.00 0.00 23515.24 865.08 30408.70
00:10:18.686 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.686 raid0 : 1.06 5315.98 20.77 0.00 0.00 23488.32 1494.22 28940.70
00:10:18.686 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.686 concat0 : 1.06 5308.13 20.73 0.00 0.00 23446.10 1474.56 27472.69
00:10:18.686 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.686 raid1 : 1.06 5298.33 20.70 0.00 0.00 23400.62 2372.40 25060.97
00:10:18.686 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.686 AIO0 : 1.06 5292.41 20.67 0.00 0.00 23320.76 956.83 24222.11
00:10:18.686 ===================================================================================================================
00:10:18.686 Total : 85562.80 334.23 0.00 0.00 23563.89 616.04 39636.17
00:10:19.254
00:10:19.254 real 0m2.125s
00:10:19.254 user 0m1.751s
00:10:19.254 sys 0m0.311s
00:10:19.254 13:10:29 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1126 -- #
xtrace_disable
00:10:19.254 13:10:29 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:10:19.254 ************************************
00:10:19.254 END TEST bdev_write_zeroes
00:10:19.254 ************************************
00:10:19.254 13:10:29 blockdev_general -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:19.254 13:10:29 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:10:19.254 13:10:29 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:19.254 13:10:29 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:19.254 ************************************
00:10:19.254 START TEST bdev_json_nonenclosed
00:10:19.254 ************************************
00:10:19.254 13:10:29 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:19.254 [2024-07-25 13:10:29.643701] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:10:19.254 [2024-07-25 13:10:29.643762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid813131 ] 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:19.254 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:19.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.254 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:19.514 [2024-07-25 13:10:29.778912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.514 [2024-07-25 13:10:29.862641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.514 [2024-07-25 13:10:29.862705] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
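Editor's note: the bdev_json_nonenclosed test (its error appears just above: "Invalid JSON configuration: not enclosed in {}") feeds bdevperf a config whose top level is not a JSON object and expects the load to fail. The contents of the actual nonenclosed.json fixture are not shown in this log; the check below is a hypothetical stand-in that demonstrates the same top-level requirement.

```shell
# Illustrative only, not SPDK code: a --json config must parse as a single
# JSON object ("enclosed in {}"); anything else is rejected up front.
is_enclosed() {
    printf '%s' "$1" | python3 -c '
import json, sys
try:
    cfg = json.load(sys.stdin)
except ValueError:          # not valid JSON at all
    sys.exit(1)
sys.exit(0 if isinstance(cfg, dict) else 1)'
}

is_enclosed '{"subsystems": []}' && echo "accepted"
is_enclosed '"subsystems": []' || echo "rejected: not enclosed in {}"
```

The second call mimics the failing fixture: a bare key/value at top level is not a JSON object, so the loader aborts and the app stops non-zero, which is exactly what this test asserts.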
00:10:19.514 [2024-07-25 13:10:29.862724] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:10:19.514 [2024-07-25 13:10:29.862735] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:19.514
00:10:19.514 real 0m0.367s
00:10:19.514 user 0m0.202s
00:10:19.514 sys 0m0.161s
00:10:19.514 13:10:29 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:19.514 13:10:29 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:10:19.514 ************************************
00:10:19.514 END TEST bdev_json_nonenclosed
00:10:19.514 ************************************
00:10:19.514 13:10:29 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:19.514 13:10:29 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:10:19.514 13:10:29 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:19.514 13:10:29 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:19.773 ************************************
00:10:19.773 START TEST bdev_json_nonarray
00:10:19.773 ************************************
00:10:19.773 13:10:30 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:19.773 [2024-07-25 13:10:30.094908] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:10:19.773 [2024-07-25 13:10:30.094965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid813337 ] 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:19.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.773 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:19.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.774 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:19.774 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.774 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:19.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.774 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:19.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.774 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:19.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.774 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:19.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.774 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:19.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.774 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:19.774 [2024-07-25 13:10:30.225561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.033 [2024-07-25 13:10:30.309154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.033 [2024-07-25 13:10:30.309221] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
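Editor's note: the companion bdev_json_nonarray test (error above: "Invalid JSON configuration: 'subsystems' should be an array") covers the next level of validation: the file is a proper JSON object, but its "subsystems" key holds the wrong type. Again, the real nonarray.json fixture is not shown in this log; the snippet below is a hypothetical stand-in for the same type check.

```shell
# Illustrative only, not SPDK code: even a well-formed JSON object is
# rejected when "subsystems" is not an array.
subsystems_is_array() {
    printf '%s' "$1" | python3 -c '
import json, sys
try:
    cfg = json.load(sys.stdin)
except ValueError:
    sys.exit(1)
sys.exit(0 if isinstance(cfg.get("subsystems"), list) else 1)'
}

subsystems_is_array '{"subsystems": []}' && echo "accepted"
subsystems_is_array '{"subsystems": {}}' || echo "rejected: should be an array"
```

As with the nonenclosed case, the load failure makes bdevperf exit non-zero, and the test harness treats that expected failure as a pass.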
00:10:20.033 [2024-07-25 13:10:30.309237] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:20.033 [2024-07-25 13:10:30.309248] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:20.033 00:10:20.033 real 0m0.358s 00:10:20.033 user 0m0.211s 00:10:20.033 sys 0m0.145s 00:10:20.033 13:10:30 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.033 13:10:30 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:20.033 ************************************ 00:10:20.033 END TEST bdev_json_nonarray 00:10:20.033 ************************************ 00:10:20.033 13:10:30 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]] 00:10:20.033 13:10:30 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite '' 00:10:20.033 13:10:30 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:20.033 13:10:30 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.033 13:10:30 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:20.033 ************************************ 00:10:20.033 START TEST bdev_qos 00:10:20.033 ************************************ 00:10:20.033 13:10:30 blockdev_general.bdev_qos -- common/autotest_common.sh@1125 -- # qos_test_suite '' 00:10:20.033 13:10:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=813368 00:10:20.033 13:10:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 813368' 00:10:20.033 Process qos testing pid: 813368 00:10:20.033 13:10:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:10:20.033 13:10:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:10:20.033 13:10:30 
blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 813368 00:10:20.033 13:10:30 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # '[' -z 813368 ']' 00:10:20.033 13:10:30 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.033 13:10:30 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:20.033 13:10:30 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.033 13:10:30 blockdev_general.bdev_qos -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:20.033 13:10:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:20.292 [2024-07-25 13:10:30.541916] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:10:20.292 [2024-07-25 13:10:30.541976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid813368 ] 00:10:20.292 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.292 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:20.292 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.292 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:20.292 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.292 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:20.292 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.292 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3d:01.4 cannot be 
used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:20.293 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:20.293 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.293 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:20.293 [2024-07-25 13:10:30.661914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.293 [2024-07-25 13:10:30.746802] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 1 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@864 -- # return 0 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:21.229 Malloc_0 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_0 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- 
# set +x 00:10:21.229 [ 00:10:21.229 { 00:10:21.229 "name": "Malloc_0", 00:10:21.229 "aliases": [ 00:10:21.229 "2aa2e89a-f22d-419e-9736-82e7be72607c" 00:10:21.229 ], 00:10:21.229 "product_name": "Malloc disk", 00:10:21.229 "block_size": 512, 00:10:21.229 "num_blocks": 262144, 00:10:21.229 "uuid": "2aa2e89a-f22d-419e-9736-82e7be72607c", 00:10:21.229 "assigned_rate_limits": { 00:10:21.229 "rw_ios_per_sec": 0, 00:10:21.229 "rw_mbytes_per_sec": 0, 00:10:21.229 "r_mbytes_per_sec": 0, 00:10:21.229 "w_mbytes_per_sec": 0 00:10:21.229 }, 00:10:21.229 "claimed": false, 00:10:21.229 "zoned": false, 00:10:21.229 "supported_io_types": { 00:10:21.229 "read": true, 00:10:21.229 "write": true, 00:10:21.229 "unmap": true, 00:10:21.229 "flush": true, 00:10:21.229 "reset": true, 00:10:21.229 "nvme_admin": false, 00:10:21.229 "nvme_io": false, 00:10:21.229 "nvme_io_md": false, 00:10:21.229 "write_zeroes": true, 00:10:21.229 "zcopy": true, 00:10:21.229 "get_zone_info": false, 00:10:21.229 "zone_management": false, 00:10:21.229 "zone_append": false, 00:10:21.229 "compare": false, 00:10:21.229 "compare_and_write": false, 00:10:21.229 "abort": true, 00:10:21.229 "seek_hole": false, 00:10:21.229 "seek_data": false, 00:10:21.229 "copy": true, 00:10:21.229 "nvme_iov_md": false 00:10:21.229 }, 00:10:21.229 "memory_domains": [ 00:10:21.229 { 00:10:21.229 "dma_device_id": "system", 00:10:21.229 "dma_device_type": 1 00:10:21.229 }, 00:10:21.229 { 00:10:21.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.229 "dma_device_type": 2 00:10:21.229 } 00:10:21.229 ], 00:10:21.229 "driver_specific": {} 00:10:21.229 } 00:10:21.229 ] 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:21.229 Null_1 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Null_1 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:21.229 [ 00:10:21.229 { 00:10:21.229 "name": "Null_1", 00:10:21.229 "aliases": [ 00:10:21.229 "89eea2d6-c0e7-47d8-bc53-c071cba3882c" 00:10:21.229 ], 00:10:21.229 "product_name": "Null disk", 00:10:21.229 "block_size": 512, 00:10:21.229 "num_blocks": 262144, 00:10:21.229 "uuid": "89eea2d6-c0e7-47d8-bc53-c071cba3882c", 00:10:21.229 "assigned_rate_limits": { 00:10:21.229 "rw_ios_per_sec": 0, 
00:10:21.229 "rw_mbytes_per_sec": 0, 00:10:21.229 "r_mbytes_per_sec": 0, 00:10:21.229 "w_mbytes_per_sec": 0 00:10:21.229 }, 00:10:21.229 "claimed": false, 00:10:21.229 "zoned": false, 00:10:21.229 "supported_io_types": { 00:10:21.229 "read": true, 00:10:21.229 "write": true, 00:10:21.229 "unmap": false, 00:10:21.229 "flush": false, 00:10:21.229 "reset": true, 00:10:21.229 "nvme_admin": false, 00:10:21.229 "nvme_io": false, 00:10:21.229 "nvme_io_md": false, 00:10:21.229 "write_zeroes": true, 00:10:21.229 "zcopy": false, 00:10:21.229 "get_zone_info": false, 00:10:21.229 "zone_management": false, 00:10:21.229 "zone_append": false, 00:10:21.229 "compare": false, 00:10:21.229 "compare_and_write": false, 00:10:21.229 "abort": true, 00:10:21.229 "seek_hole": false, 00:10:21.229 "seek_data": false, 00:10:21.229 "copy": false, 00:10:21.229 "nvme_iov_md": false 00:10:21.229 }, 00:10:21.229 "driver_specific": {} 00:10:21.229 } 00:10:21.229 ] 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result 
IOPS Malloc_0 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:10:21.229 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:10:21.230 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:10:21.230 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:10:21.230 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:10:21.230 13:10:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:10:21.230 Running I/O for 60 seconds... 00:10:26.500 13:10:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 68666.53 274666.13 0.00 0.00 276480.00 0.00 0.00 ' 00:10:26.500 13:10:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:10:26.500 13:10:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:10:26.500 13:10:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=68666.53 00:10:26.500 13:10:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 68666 00:10:26.500 13:10:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=68666 00:10:26.500 13:10:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=17000 00:10:26.500 13:10:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 17000 -gt 1000 ']' 00:10:26.500 13:10:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 17000 Malloc_0 00:10:26.500 13:10:36 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.500 13:10:36 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:26.500 13:10:36 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.500 13:10:36 blockdev_general.bdev_qos -- 
bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 17000 IOPS Malloc_0 00:10:26.500 13:10:36 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:26.500 13:10:36 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.500 13:10:36 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:26.500 ************************************ 00:10:26.500 START TEST bdev_qos_iops 00:10:26.500 ************************************ 00:10:26.500 13:10:36 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1125 -- # run_qos_test 17000 IOPS Malloc_0 00:10:26.500 13:10:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=17000 00:10:26.500 13:10:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0 00:10:26.500 13:10:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0 00:10:26.500 13:10:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:10:26.500 13:10:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:10:26.500 13:10:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result 00:10:26.500 13:10:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:10:26.500 13:10:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:10:26.501 13:10:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1 00:10:31.852 13:10:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 16997.46 67989.83 0.00 0.00 68952.00 0.00 0.00 ' 00:10:31.852 13:10:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:10:31.852 
13:10:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:10:31.852 13:10:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=16997.46 00:10:31.852 13:10:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@384 -- # echo 16997 00:10:31.852 13:10:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=16997 00:10:31.852 13:10:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']' 00:10:31.852 13:10:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=15300 00:10:31.852 13:10:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=18700 00:10:31.852 13:10:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 16997 -lt 15300 ']' 00:10:31.852 13:10:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 16997 -gt 18700 ']' 00:10:31.852 00:10:31.852 real 0m5.247s 00:10:31.852 user 0m0.106s 00:10:31.852 sys 0m0.048s 00:10:31.852 13:10:42 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.852 13:10:42 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:10:31.852 ************************************ 00:10:31.852 END TEST bdev_qos_iops 00:10:31.852 ************************************ 00:10:31.852 13:10:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1 00:10:31.852 13:10:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:10:31.852 13:10:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:10:31.852 13:10:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:10:31.852 13:10:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 
00:10:31.852 13:10:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1 00:10:31.852 13:10:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 21010.40 84041.60 0.00 0.00 86016.00 0.00 0.00 ' 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=86016.00 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 86016 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=86016 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=8 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 8 -lt 2 ']' 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 8 Null_1 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 8 BANDWIDTH Null_1 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.124 13:10:47 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:37.124 ************************************ 
00:10:37.124 START TEST bdev_qos_bw 00:10:37.124 ************************************ 00:10:37.124 13:10:47 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1125 -- # run_qos_test 8 BANDWIDTH Null_1 00:10:37.124 13:10:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=8 00:10:37.124 13:10:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:10:37.124 13:10:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1 00:10:37.124 13:10:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:10:37.124 13:10:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:10:37.124 13:10:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:10:37.124 13:10:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:10:37.124 13:10:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1 00:10:37.124 13:10:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 2048.10 8192.41 0.00 0.00 8356.00 0.00 0.00 ' 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=8356.00 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 8356 00:10:42.393 13:10:52 
blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=8356 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=8192 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=7372 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=9011 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 8356 -lt 7372 ']' 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 8356 -gt 9011 ']' 00:10:42.393 00:10:42.393 real 0m5.252s 00:10:42.393 user 0m0.110s 00:10:42.393 sys 0m0.043s 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:10:42.393 ************************************ 00:10:42.393 END TEST bdev_qos_bw 00:10:42.393 ************************************ 00:10:42.393 13:10:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:10:42.393 13:10:52 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.393 13:10:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:42.393 13:10:52 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.393 13:10:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:10:42.393 13:10:52 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:42.393 13:10:52 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.393 13:10:52 blockdev_general.bdev_qos 
-- common/autotest_common.sh@10 -- # set +x 00:10:42.393 ************************************ 00:10:42.393 START TEST bdev_qos_ro_bw 00:10:42.393 ************************************ 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1125 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:10:42.393 13:10:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1 00:10:47.675 13:10:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 512.01 2048.06 0.00 0.00 2060.00 0.00 0.00 ' 00:10:47.675 13:10:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:10:47.675 13:10:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:47.675 13:10:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:10:47.675 13:10:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # 
iostat_result=2060.00 00:10:47.675 13:10:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2060 00:10:47.675 13:10:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2060 00:10:47.675 13:10:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:47.675 13:10:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048 00:10:47.675 13:10:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843 00:10:47.675 13:10:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252 00:10:47.675 13:10:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2060 -lt 1843 ']' 00:10:47.675 13:10:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2060 -gt 2252 ']' 00:10:47.675 00:10:47.675 real 0m5.169s 00:10:47.675 user 0m0.101s 00:10:47.675 sys 0m0.043s 00:10:47.675 13:10:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.675 13:10:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:10:47.675 ************************************ 00:10:47.675 END TEST bdev_qos_ro_bw 00:10:47.675 ************************************ 00:10:47.675 13:10:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:10:47.675 13:10:57 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.675 13:10:57 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:48.254 13:10:58 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.254 13:10:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1 00:10:48.254 13:10:58 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.254 13:10:58 
blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:48.254 00:10:48.254 Latency(us) 00:10:48.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.254 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:10:48.254 Malloc_0 : 26.78 23224.34 90.72 0.00 0.00 10916.96 1835.01 503316.48 00:10:48.254 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:10:48.254 Null_1 : 26.92 22053.08 86.14 0.00 0.00 11577.82 727.45 150156.08 00:10:48.254 =================================================================================================================== 00:10:48.254 Total : 45277.42 176.86 0.00 0.00 11239.74 727.45 503316.48 00:10:48.254 0 00:10:48.254 13:10:58 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.254 13:10:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 813368 00:10:48.254 13:10:58 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # '[' -z 813368 ']' 00:10:48.254 13:10:58 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # kill -0 813368 00:10:48.254 13:10:58 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # uname 00:10:48.254 13:10:58 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:48.254 13:10:58 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 813368 00:10:48.254 13:10:58 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:48.255 13:10:58 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:48.255 13:10:58 blockdev_general.bdev_qos -- common/autotest_common.sh@968 -- # echo 'killing process with pid 813368' 00:10:48.255 killing process with pid 813368 00:10:48.255 13:10:58 blockdev_general.bdev_qos -- common/autotest_common.sh@969 -- # kill 813368 00:10:48.255 Received shutdown signal, test 
time was about 26.989171 seconds 00:10:48.255 00:10:48.255 Latency(us) 00:10:48.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.255 =================================================================================================================== 00:10:48.255 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:48.255 13:10:58 blockdev_general.bdev_qos -- common/autotest_common.sh@974 -- # wait 813368 00:10:48.513 13:10:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT 00:10:48.513 00:10:48.513 real 0m28.421s 00:10:48.513 user 0m29.179s 00:10:48.513 sys 0m0.814s 00:10:48.513 13:10:58 blockdev_general.bdev_qos -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.513 13:10:58 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:48.513 ************************************ 00:10:48.513 END TEST bdev_qos 00:10:48.513 ************************************ 00:10:48.513 13:10:58 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:10:48.514 13:10:58 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:48.514 13:10:58 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.514 13:10:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:48.514 ************************************ 00:10:48.514 START TEST bdev_qd_sampling 00:10:48.514 ************************************ 00:10:48.514 13:10:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1125 -- # qd_sampling_test_suite '' 00:10:48.514 13:10:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:10:48.514 13:10:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:10:48.514 13:10:58 blockdev_general.bdev_qd_sampling -- 
bdev/blockdev.sh@540 -- # QD_PID=818221 00:10:48.514 13:10:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 818221' 00:10:48.514 Process bdev QD sampling period testing pid: 818221 00:10:48.514 13:10:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:10:48.514 13:10:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 818221 00:10:48.514 13:10:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # '[' -z 818221 ']' 00:10:48.514 13:10:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.514 13:10:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.514 13:10:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.514 13:10:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.514 13:10:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:48.772 [2024-07-25 13:10:59.039114] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
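For reference, the bdev_qos_ro_bw check traced earlier in this log (blockdev.sh@393-399) accepts any measured rate within a ±10% window around the configured limit; a minimal standalone sketch of that arithmetic, with the variable names and values taken directly from the trace above:

```shell
# Sketch of the +/-10% tolerance window applied by the bdev_qos_ro_bw
# check traced earlier in this log (qos_limit/qos_result as reported there).
qos_limit=2048        # configured read-only bandwidth limit
qos_result=2060       # rate measured via iostat
lower_limit=$((qos_limit * 9 / 10))    # 90% of the limit -> 1843
upper_limit=$((qos_limit * 11 / 10))   # 110% of the limit -> 2252
if [ "$qos_result" -lt "$lower_limit" ] || [ "$qos_result" -gt "$upper_limit" ]; then
    echo "qos_result $qos_result is outside [$lower_limit, $upper_limit]"
    exit 1
fi
echo "qos_result $qos_result is within [$lower_limit, $upper_limit]"
```

With the traced values, 2060 falls inside [1843, 2252], so neither `-lt` nor `-gt` branch fires and the test passes.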
00:10:48.773 [2024-07-25 13:10:59.039182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818221 ] 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:48.773 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:48.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.773 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:48.773 [2024-07-25 13:10:59.173165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:48.773 [2024-07-25 13:10:59.259638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.773 [2024-07-25 13:10:59.259644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@864 -- # return 0 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:49.710 Malloc_QD 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@899 -- # local bdev_name=Malloc_QD 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # local i 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.710 13:10:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:49.710 [ 00:10:49.710 { 00:10:49.710 "name": "Malloc_QD", 00:10:49.710 "aliases": [ 00:10:49.710 "8a0a9467-8200-46c4-a10d-545657db5974" 00:10:49.710 ], 00:10:49.710 "product_name": "Malloc disk", 00:10:49.710 "block_size": 512, 00:10:49.710 "num_blocks": 262144, 00:10:49.710 "uuid": "8a0a9467-8200-46c4-a10d-545657db5974", 00:10:49.710 "assigned_rate_limits": { 00:10:49.711 "rw_ios_per_sec": 0, 00:10:49.711 "rw_mbytes_per_sec": 0, 00:10:49.711 "r_mbytes_per_sec": 0, 00:10:49.711 "w_mbytes_per_sec": 0 00:10:49.711 }, 00:10:49.711 "claimed": false, 00:10:49.711 "zoned": false, 00:10:49.711 "supported_io_types": { 00:10:49.711 "read": true, 00:10:49.711 "write": true, 00:10:49.711 "unmap": true, 
00:10:49.711 "flush": true, 00:10:49.711 "reset": true, 00:10:49.711 "nvme_admin": false, 00:10:49.711 "nvme_io": false, 00:10:49.711 "nvme_io_md": false, 00:10:49.711 "write_zeroes": true, 00:10:49.711 "zcopy": true, 00:10:49.711 "get_zone_info": false, 00:10:49.711 "zone_management": false, 00:10:49.711 "zone_append": false, 00:10:49.711 "compare": false, 00:10:49.711 "compare_and_write": false, 00:10:49.711 "abort": true, 00:10:49.711 "seek_hole": false, 00:10:49.711 "seek_data": false, 00:10:49.711 "copy": true, 00:10:49.711 "nvme_iov_md": false 00:10:49.711 }, 00:10:49.711 "memory_domains": [ 00:10:49.711 { 00:10:49.711 "dma_device_id": "system", 00:10:49.711 "dma_device_type": 1 00:10:49.711 }, 00:10:49.711 { 00:10:49.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.711 "dma_device_type": 2 00:10:49.711 } 00:10:49.711 ], 00:10:49.711 "driver_specific": {} 00:10:49.711 } 00:10:49.711 ] 00:10:49.711 13:11:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.711 13:11:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@907 -- # return 0 00:10:49.711 13:11:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:10:49.711 13:11:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:49.711 Running I/O for 5 seconds... 
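An aside on the bdev_get_iostat output that follows: with queue-depth sampling enabled the stats include io_time and weighted_io_time, and (assumption: weighted_io_time accumulates queue depth per unit of io_time across polling periods) their ratio approximates the average queue depth over the sampling window. A minimal sketch using the values reported in the JSON below:

```shell
# Sketch (assumption: weighted_io_time accumulates queue depth per unit of
# io_time, so their ratio approximates the average queue depth) using the
# io_time / weighted_io_time values from the bdev_get_iostat output below.
io_time=30
weighted_io_time=15360
avg_queue_depth=$((weighted_io_time / io_time))
echo "average queue depth ~ $avg_queue_depth"    # 15360 / 30 = 512
```

This matches the `"queue_depth": 512` field reported alongside those counters.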
00:10:51.615 13:11:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:10:51.615 13:11:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:10:51.615 13:11:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:10:51.615 13:11:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:10:51.615 13:11:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:10:51.615 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.615 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:51.615 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.615 13:11:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:10:51.615 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.615 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:51.615 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.615 13:11:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:10:51.615 "tick_rate": 2500000000, 00:10:51.615 "ticks": 14263984434577230, 00:10:51.615 "bdevs": [ 00:10:51.615 { 00:10:51.615 "name": "Malloc_QD", 00:10:51.615 "bytes_read": 804303360, 00:10:51.615 "num_read_ops": 196356, 00:10:51.615 "bytes_written": 0, 00:10:51.615 "num_write_ops": 0, 00:10:51.615 "bytes_unmapped": 0, 00:10:51.615 "num_unmap_ops": 0, 00:10:51.615 "bytes_copied": 0, 00:10:51.615 "num_copy_ops": 0, 00:10:51.615 "read_latency_ticks": 2450582082330, 00:10:51.615 "max_read_latency_ticks": 15388652, 00:10:51.615 "min_read_latency_ticks": 286164, 
00:10:51.615 "write_latency_ticks": 0, 00:10:51.615 "max_write_latency_ticks": 0, 00:10:51.615 "min_write_latency_ticks": 0, 00:10:51.615 "unmap_latency_ticks": 0, 00:10:51.615 "max_unmap_latency_ticks": 0, 00:10:51.615 "min_unmap_latency_ticks": 0, 00:10:51.615 "copy_latency_ticks": 0, 00:10:51.615 "max_copy_latency_ticks": 0, 00:10:51.615 "min_copy_latency_ticks": 0, 00:10:51.615 "io_error": {}, 00:10:51.615 "queue_depth_polling_period": 10, 00:10:51.615 "queue_depth": 512, 00:10:51.615 "io_time": 30, 00:10:51.615 "weighted_io_time": 15360 00:10:51.615 } 00:10:51.615 ] 00:10:51.615 }' 00:10:51.615 13:11:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:10:51.882 13:11:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:10:51.882 13:11:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:10:51.882 13:11:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:10:51.882 13:11:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:10:51.882 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.882 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:51.882 00:10:51.882 Latency(us) 00:10:51.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.883 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:10:51.883 Malloc_QD : 2.00 50852.97 198.64 0.00 0.00 5022.01 1304.17 5321.52 00:10:51.883 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:10:51.883 Malloc_QD : 2.00 51578.79 201.48 0.00 0.00 4951.61 891.29 6160.38 00:10:51.883 =================================================================================================================== 00:10:51.883 Total : 102431.76 400.12 
0.00 0.00 4986.55 891.29 6160.38 00:10:51.883 0 00:10:51.883 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.883 13:11:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 818221 00:10:51.883 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # '[' -z 818221 ']' 00:10:51.883 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # kill -0 818221 00:10:51.883 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # uname 00:10:51.883 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.883 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 818221 00:10:51.883 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:51.883 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:51.883 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@968 -- # echo 'killing process with pid 818221' 00:10:51.883 killing process with pid 818221 00:10:51.883 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@969 -- # kill 818221 00:10:51.883 Received shutdown signal, test time was about 2.082486 seconds 00:10:51.883 00:10:51.883 Latency(us) 00:10:51.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.883 =================================================================================================================== 00:10:51.883 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:51.883 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@974 -- # wait 818221 00:10:52.149 13:11:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:10:52.149 00:10:52.149 real 0m3.412s 00:10:52.149 
user 0m6.703s 00:10:52.149 sys 0m0.423s 00:10:52.149 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.149 13:11:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:52.149 ************************************ 00:10:52.149 END TEST bdev_qd_sampling 00:10:52.149 ************************************ 00:10:52.149 13:11:02 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:10:52.149 13:11:02 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:52.149 13:11:02 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.149 13:11:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:52.149 ************************************ 00:10:52.149 START TEST bdev_error 00:10:52.149 ************************************ 00:10:52.149 13:11:02 blockdev_general.bdev_error -- common/autotest_common.sh@1125 -- # error_test_suite '' 00:10:52.149 13:11:02 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 00:10:52.149 13:11:02 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:10:52.149 13:11:02 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:10:52.149 13:11:02 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=818882 00:10:52.149 13:11:02 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 818882' 00:10:52.149 Process error testing pid: 818882 00:10:52.149 13:11:02 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:10:52.149 13:11:02 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 818882 00:10:52.149 13:11:02 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 818882 ']' 00:10:52.149 13:11:02 
blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.149 13:11:02 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:52.149 13:11:02 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.149 13:11:02 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:52.149 13:11:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:52.149 [2024-07-25 13:11:02.537807] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:10:52.149 [2024-07-25 13:11:02.537864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818882 ] 00:10:52.149 [same qat_pci_device_allocate()/EAL "Requested device ... cannot be used" warning pair as during the previous bdevperf start, repeated here for devices 0000:3d:01.0 through 0000:3f:02.7] 00:10:52.408 [2024-07-25 13:11:02.659184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.408 [2024-07-25 13:11:02.741065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.977 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:52.977 13:11:03 blockdev_general.bdev_error --
common/autotest_common.sh@864 -- # return 0 00:10:52.977 13:11:03 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:10:52.977 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.977 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:53.243 Dev_1 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.243 13:11:03 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:53.243 [ 00:10:53.243 { 00:10:53.243 "name": "Dev_1", 00:10:53.243 "aliases": [ 00:10:53.243 "ab2e0be2-5cdf-4907-8383-3b25e1704c7c" 00:10:53.243 ], 
00:10:53.243 "product_name": "Malloc disk", 00:10:53.243 "block_size": 512, 00:10:53.243 "num_blocks": 262144, 00:10:53.243 "uuid": "ab2e0be2-5cdf-4907-8383-3b25e1704c7c", 00:10:53.243 "assigned_rate_limits": { 00:10:53.243 "rw_ios_per_sec": 0, 00:10:53.243 "rw_mbytes_per_sec": 0, 00:10:53.243 "r_mbytes_per_sec": 0, 00:10:53.243 "w_mbytes_per_sec": 0 00:10:53.243 }, 00:10:53.243 "claimed": false, 00:10:53.243 "zoned": false, 00:10:53.243 "supported_io_types": { 00:10:53.243 "read": true, 00:10:53.243 "write": true, 00:10:53.243 "unmap": true, 00:10:53.243 "flush": true, 00:10:53.243 "reset": true, 00:10:53.243 "nvme_admin": false, 00:10:53.243 "nvme_io": false, 00:10:53.243 "nvme_io_md": false, 00:10:53.243 "write_zeroes": true, 00:10:53.243 "zcopy": true, 00:10:53.243 "get_zone_info": false, 00:10:53.243 "zone_management": false, 00:10:53.243 "zone_append": false, 00:10:53.243 "compare": false, 00:10:53.243 "compare_and_write": false, 00:10:53.243 "abort": true, 00:10:53.243 "seek_hole": false, 00:10:53.243 "seek_data": false, 00:10:53.243 "copy": true, 00:10:53.243 "nvme_iov_md": false 00:10:53.243 }, 00:10:53.243 "memory_domains": [ 00:10:53.243 { 00:10:53.243 "dma_device_id": "system", 00:10:53.243 "dma_device_type": 1 00:10:53.243 }, 00:10:53.243 { 00:10:53.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.243 "dma_device_type": 2 00:10:53.243 } 00:10:53.243 ], 00:10:53.243 "driver_specific": {} 00:10:53.243 } 00:10:53.243 ] 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:10:53.243 13:11:03 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:53.243 true 00:10:53.243 
13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.243 13:11:03 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:53.243 Dev_2 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.243 13:11:03 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:53.243 [ 00:10:53.243 { 00:10:53.243 "name": "Dev_2", 00:10:53.243 "aliases": [ 00:10:53.243 
"83b4a231-db1c-466b-8109-77388ded46a1" 00:10:53.243 ], 00:10:53.243 "product_name": "Malloc disk", 00:10:53.243 "block_size": 512, 00:10:53.243 "num_blocks": 262144, 00:10:53.243 "uuid": "83b4a231-db1c-466b-8109-77388ded46a1", 00:10:53.243 "assigned_rate_limits": { 00:10:53.243 "rw_ios_per_sec": 0, 00:10:53.243 "rw_mbytes_per_sec": 0, 00:10:53.243 "r_mbytes_per_sec": 0, 00:10:53.243 "w_mbytes_per_sec": 0 00:10:53.243 }, 00:10:53.243 "claimed": false, 00:10:53.243 "zoned": false, 00:10:53.243 "supported_io_types": { 00:10:53.243 "read": true, 00:10:53.243 "write": true, 00:10:53.243 "unmap": true, 00:10:53.243 "flush": true, 00:10:53.243 "reset": true, 00:10:53.243 "nvme_admin": false, 00:10:53.243 "nvme_io": false, 00:10:53.243 "nvme_io_md": false, 00:10:53.243 "write_zeroes": true, 00:10:53.243 "zcopy": true, 00:10:53.243 "get_zone_info": false, 00:10:53.243 "zone_management": false, 00:10:53.243 "zone_append": false, 00:10:53.243 "compare": false, 00:10:53.243 "compare_and_write": false, 00:10:53.243 "abort": true, 00:10:53.243 "seek_hole": false, 00:10:53.243 "seek_data": false, 00:10:53.243 "copy": true, 00:10:53.243 "nvme_iov_md": false 00:10:53.243 }, 00:10:53.243 "memory_domains": [ 00:10:53.243 { 00:10:53.243 "dma_device_id": "system", 00:10:53.243 "dma_device_type": 1 00:10:53.243 }, 00:10:53.243 { 00:10:53.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.243 "dma_device_type": 2 00:10:53.243 } 00:10:53.243 ], 00:10:53.243 "driver_specific": {} 00:10:53.243 } 00:10:53.243 ] 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:10:53.243 13:11:03 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.243 13:11:03 
blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:53.243 13:11:03 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.243 13:11:03 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:10:53.243 13:11:03 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:10:53.243 Running I/O for 5 seconds... 00:10:54.219 13:11:04 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 818882 00:10:54.219 13:11:04 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. Pid: 818882' 00:10:54.219 Process is existed as continue on error is set. Pid: 818882 00:10:54.219 13:11:04 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:10:54.219 13:11:04 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.219 13:11:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:54.219 13:11:04 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.219 13:11:04 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:10:54.219 13:11:04 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.219 13:11:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:54.219 13:11:04 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.219 13:11:04 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:10:54.219 Timeout while waiting for response: 00:10:54.219 00:10:54.219 00:10:58.411 00:10:58.411 Latency(us) 00:10:58.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:58.412 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 
00:10:58.412 EE_Dev_1 : 0.91 40758.39 159.21 5.51 0.00 389.24 120.42 638.98 00:10:58.412 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:10:58.412 Dev_2 : 5.00 88360.77 345.16 0.00 0.00 177.87 61.44 18769.51 00:10:58.412 =================================================================================================================== 00:10:58.412 Total : 129119.17 504.37 5.51 0.00 194.19 61.44 18769.51 00:10:59.350 13:11:09 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 818882 00:10:59.350 13:11:09 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # '[' -z 818882 ']' 00:10:59.350 13:11:09 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # kill -0 818882 00:10:59.350 13:11:09 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # uname 00:10:59.350 13:11:09 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.350 13:11:09 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 818882 00:10:59.350 13:11:09 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:59.350 13:11:09 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:59.350 13:11:09 blockdev_general.bdev_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 818882' 00:10:59.350 killing process with pid 818882 00:10:59.350 13:11:09 blockdev_general.bdev_error -- common/autotest_common.sh@969 -- # kill 818882 00:10:59.350 Received shutdown signal, test time was about 5.000000 seconds 00:10:59.350 00:10:59.350 Latency(us) 00:10:59.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:59.350 =================================================================================================================== 00:10:59.350 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:59.350 13:11:09 blockdev_general.bdev_error -- 
common/autotest_common.sh@974 -- # wait 818882 00:10:59.610 13:11:09 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=820117 00:10:59.610 13:11:09 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 820117' 00:10:59.610 Process error testing pid: 820117 00:10:59.610 13:11:09 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:10:59.610 13:11:09 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 820117 00:10:59.610 13:11:09 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 820117 ']' 00:10:59.610 13:11:09 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.610 13:11:09 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:59.610 13:11:09 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.610 13:11:09 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:59.610 13:11:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:59.610 [2024-07-25 13:11:09.989367] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:10:59.610 [2024-07-25 13:11:09.989426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid820117 ] 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:59.610 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:59.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.610 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:59.869 [2024-07-25 13:11:10.111490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.869 [2024-07-25 13:11:10.194804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.437 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:00.437 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0 00:11:00.437 13:11:10 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:11:00.437 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.437 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:00.437 Dev_1 00:11:00.437 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.437 13:11:10 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:11:00.437 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:11:00.437 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:11:00.437 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:11:00.437 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:00.437 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:00.437 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:00.437 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.437 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:00.696 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.696 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:11:00.696 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.696 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:00.696 [ 00:11:00.696 { 00:11:00.696 "name": "Dev_1", 00:11:00.696 "aliases": [ 00:11:00.696 "34029a78-6ee1-49eb-9c2d-67ce668ae92d" 00:11:00.696 ], 00:11:00.696 "product_name": "Malloc disk", 00:11:00.697 "block_size": 512, 00:11:00.697 "num_blocks": 262144, 00:11:00.697 "uuid": "34029a78-6ee1-49eb-9c2d-67ce668ae92d", 00:11:00.697 "assigned_rate_limits": { 00:11:00.697 "rw_ios_per_sec": 0, 00:11:00.697 "rw_mbytes_per_sec": 0, 00:11:00.697 "r_mbytes_per_sec": 0, 00:11:00.697 "w_mbytes_per_sec": 0 00:11:00.697 }, 00:11:00.697 "claimed": false, 00:11:00.697 "zoned": false, 00:11:00.697 "supported_io_types": { 00:11:00.697 "read": true, 00:11:00.697 "write": true, 00:11:00.697 "unmap": true, 00:11:00.697 "flush": true, 00:11:00.697 "reset": true, 00:11:00.697 "nvme_admin": false, 00:11:00.697 "nvme_io": false, 00:11:00.697 "nvme_io_md": false, 00:11:00.697 "write_zeroes": true, 00:11:00.697 "zcopy": true, 00:11:00.697 "get_zone_info": 
false, 00:11:00.697 "zone_management": false, 00:11:00.697 "zone_append": false, 00:11:00.697 "compare": false, 00:11:00.697 "compare_and_write": false, 00:11:00.697 "abort": true, 00:11:00.697 "seek_hole": false, 00:11:00.697 "seek_data": false, 00:11:00.697 "copy": true, 00:11:00.697 "nvme_iov_md": false 00:11:00.697 }, 00:11:00.697 "memory_domains": [ 00:11:00.697 { 00:11:00.697 "dma_device_id": "system", 00:11:00.697 "dma_device_type": 1 00:11:00.697 }, 00:11:00.697 { 00:11:00.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.697 "dma_device_type": 2 00:11:00.697 } 00:11:00.697 ], 00:11:00.697 "driver_specific": {} 00:11:00.697 } 00:11:00.697 ] 00:11:00.697 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.697 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:11:00.697 13:11:10 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:11:00.697 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.697 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:00.697 true 00:11:00.697 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.697 13:11:10 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:11:00.697 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.697 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:00.697 Dev_2 00:11:00.697 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.697 13:11:10 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:11:00.697 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:11:00.697 13:11:10 blockdev_general.bdev_error -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:00.697 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:11:00.697 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:00.697 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:00.697 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:00.697 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.697 13:11:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:00.697 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.697 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:11:00.697 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.697 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:00.697 [ 00:11:00.697 { 00:11:00.697 "name": "Dev_2", 00:11:00.697 "aliases": [ 00:11:00.697 "85fe59eb-ea34-42b8-8d65-478b37fbf624" 00:11:00.697 ], 00:11:00.697 "product_name": "Malloc disk", 00:11:00.697 "block_size": 512, 00:11:00.697 "num_blocks": 262144, 00:11:00.697 "uuid": "85fe59eb-ea34-42b8-8d65-478b37fbf624", 00:11:00.697 "assigned_rate_limits": { 00:11:00.697 "rw_ios_per_sec": 0, 00:11:00.697 "rw_mbytes_per_sec": 0, 00:11:00.697 "r_mbytes_per_sec": 0, 00:11:00.697 "w_mbytes_per_sec": 0 00:11:00.697 }, 00:11:00.697 "claimed": false, 00:11:00.697 "zoned": false, 00:11:00.697 "supported_io_types": { 00:11:00.697 "read": true, 00:11:00.697 "write": true, 00:11:00.697 "unmap": true, 00:11:00.697 "flush": true, 00:11:00.697 "reset": true, 00:11:00.697 "nvme_admin": false, 00:11:00.697 "nvme_io": false, 00:11:00.697 "nvme_io_md": false, 00:11:00.697 "write_zeroes": true, 
00:11:00.697 "zcopy": true, 00:11:00.697 "get_zone_info": false, 00:11:00.697 "zone_management": false, 00:11:00.697 "zone_append": false, 00:11:00.697 "compare": false, 00:11:00.697 "compare_and_write": false, 00:11:00.697 "abort": true, 00:11:00.697 "seek_hole": false, 00:11:00.697 "seek_data": false, 00:11:00.697 "copy": true, 00:11:00.697 "nvme_iov_md": false 00:11:00.697 }, 00:11:00.697 "memory_domains": [ 00:11:00.697 { 00:11:00.697 "dma_device_id": "system", 00:11:00.697 "dma_device_type": 1 00:11:00.697 }, 00:11:00.697 { 00:11:00.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.697 "dma_device_type": 2 00:11:00.697 } 00:11:00.697 ], 00:11:00.697 "driver_specific": {} 00:11:00.697 } 00:11:00.697 ] 00:11:00.697 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.697 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:11:00.697 13:11:11 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:11:00.697 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.697 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:00.697 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.697 13:11:11 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 820117 00:11:00.697 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # local es=0 00:11:00.697 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # valid_exec_arg wait 820117 00:11:00.697 13:11:11 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:11:00.697 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@638 -- # local arg=wait 00:11:00.697 13:11:11 blockdev_general.bdev_error -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:00.697 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # type -t wait 00:11:00.697 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:00.697 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # wait 820117 00:11:00.697 Running I/O for 5 seconds... 00:11:00.697 task offset: 220048 on job bdev=EE_Dev_1 fails 00:11:00.697 00:11:00.697 Latency(us) 00:11:00.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.698 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:11:00.698 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:11:00.698 EE_Dev_1 : 0.00 31518.62 123.12 7163.32 0.00 346.02 126.16 616.04 00:11:00.698 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:11:00.698 Dev_2 : 0.00 19536.02 76.31 0.00 0.00 611.76 115.51 1140.33 00:11:00.698 =================================================================================================================== 00:11:00.698 Total : 51054.64 199.43 7163.32 0.00 490.15 115.51 1140.33 00:11:00.698 [2024-07-25 13:11:11.151200] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:00.698 request: 00:11:00.698 { 00:11:00.698 "method": "perform_tests", 00:11:00.698 "req_id": 1 00:11:00.698 } 00:11:00.698 Got JSON-RPC error response 00:11:00.698 response: 00:11:00.698 { 00:11:00.698 "code": -32603, 00:11:00.698 "message": "bdevperf failed with error Operation not permitted" 00:11:00.698 } 00:11:00.957 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # es=255 00:11:00.957 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:00.957 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@662 -- # es=127 00:11:00.957 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@663 
-- # case "$es" in 00:11:00.957 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@670 -- # es=1 00:11:00.957 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:00.957 00:11:00.957 real 0m8.920s 00:11:00.957 user 0m9.293s 00:11:00.957 sys 0m0.838s 00:11:00.957 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:00.957 13:11:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:00.957 ************************************ 00:11:00.957 END TEST bdev_error 00:11:00.957 ************************************ 00:11:00.957 13:11:11 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:11:00.957 13:11:11 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:01.218 13:11:11 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.218 13:11:11 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:01.218 ************************************ 00:11:01.218 START TEST bdev_stat 00:11:01.218 ************************************ 00:11:01.218 13:11:11 blockdev_general.bdev_stat -- common/autotest_common.sh@1125 -- # stat_test_suite '' 00:11:01.218 13:11:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT 00:11:01.218 13:11:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=820410 00:11:01.218 13:11:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 820410' 00:11:01.218 Process Bdev IO statistics testing pid: 820410 00:11:01.218 13:11:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:11:01.218 13:11:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:11:01.218 
13:11:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 820410 00:11:01.218 13:11:11 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # '[' -z 820410 ']' 00:11:01.218 13:11:11 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.218 13:11:11 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:01.218 13:11:11 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.218 13:11:11 blockdev_general.bdev_stat -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:01.218 13:11:11 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:01.218 [2024-07-25 13:11:11.544280] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:11:01.218 [2024-07-25 13:11:11.544342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid820410 ] 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3d:02.3 cannot be used 
00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:01.218 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:01.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:01.218 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:01.218 [2024-07-25 13:11:11.676580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:01.478 [2024-07-25 13:11:11.765660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.478 [2024-07-25 13:11:11.765666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@864 -- # return 0 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:02.048 Malloc_STAT 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local 
bdev_name=Malloc_STAT 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # local i 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:02.048 [ 00:11:02.048 { 00:11:02.048 "name": "Malloc_STAT", 00:11:02.048 "aliases": [ 00:11:02.048 "ddb889d5-7ea0-433d-9253-08d5efce106e" 00:11:02.048 ], 00:11:02.048 "product_name": "Malloc disk", 00:11:02.048 "block_size": 512, 00:11:02.048 "num_blocks": 262144, 00:11:02.048 "uuid": "ddb889d5-7ea0-433d-9253-08d5efce106e", 00:11:02.048 "assigned_rate_limits": { 00:11:02.048 "rw_ios_per_sec": 0, 00:11:02.048 "rw_mbytes_per_sec": 0, 00:11:02.048 "r_mbytes_per_sec": 0, 00:11:02.048 "w_mbytes_per_sec": 0 00:11:02.048 }, 00:11:02.048 "claimed": false, 00:11:02.048 "zoned": false, 00:11:02.048 "supported_io_types": { 00:11:02.048 "read": true, 00:11:02.048 "write": true, 00:11:02.048 "unmap": true, 00:11:02.048 "flush": true, 00:11:02.048 "reset": true, 00:11:02.048 "nvme_admin": false, 00:11:02.048 "nvme_io": false, 
00:11:02.048 "nvme_io_md": false, 00:11:02.048 "write_zeroes": true, 00:11:02.048 "zcopy": true, 00:11:02.048 "get_zone_info": false, 00:11:02.048 "zone_management": false, 00:11:02.048 "zone_append": false, 00:11:02.048 "compare": false, 00:11:02.048 "compare_and_write": false, 00:11:02.048 "abort": true, 00:11:02.048 "seek_hole": false, 00:11:02.048 "seek_data": false, 00:11:02.048 "copy": true, 00:11:02.048 "nvme_iov_md": false 00:11:02.048 }, 00:11:02.048 "memory_domains": [ 00:11:02.048 { 00:11:02.048 "dma_device_id": "system", 00:11:02.048 "dma_device_type": 1 00:11:02.048 }, 00:11:02.048 { 00:11:02.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.048 "dma_device_type": 2 00:11:02.048 } 00:11:02.048 ], 00:11:02.048 "driver_specific": {} 00:11:02.048 } 00:11:02.048 ] 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- common/autotest_common.sh@907 -- # return 0 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:11:02.048 13:11:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:02.307 Running I/O for 10 seconds... 
00:11:04.215 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:11:04.215 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:11:04.215 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:11:04.215 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:11:04.215 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:11:04.215 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:11:04.215 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:11:04.215 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:11:04.215 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:11:04.215 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:11:04.215 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.215 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:04.215 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.215 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:11:04.215 "tick_rate": 2500000000, 00:11:04.215 "ticks": 14264015601468316, 00:11:04.215 "bdevs": [ 00:11:04.215 { 00:11:04.215 "name": "Malloc_STAT", 00:11:04.215 "bytes_read": 801157632, 00:11:04.215 "num_read_ops": 195588, 00:11:04.215 "bytes_written": 0, 00:11:04.215 "num_write_ops": 0, 00:11:04.215 "bytes_unmapped": 0, 00:11:04.215 "num_unmap_ops": 0, 00:11:04.215 "bytes_copied": 0, 00:11:04.215 "num_copy_ops": 0, 00:11:04.215 "read_latency_ticks": 2430603566562, 00:11:04.215 "max_read_latency_ticks": 14835796, 00:11:04.215 "min_read_latency_ticks": 261814, 
00:11:04.215 "write_latency_ticks": 0, 00:11:04.215 "max_write_latency_ticks": 0, 00:11:04.215 "min_write_latency_ticks": 0, 00:11:04.215 "unmap_latency_ticks": 0, 00:11:04.215 "max_unmap_latency_ticks": 0, 00:11:04.215 "min_unmap_latency_ticks": 0, 00:11:04.215 "copy_latency_ticks": 0, 00:11:04.215 "max_copy_latency_ticks": 0, 00:11:04.215 "min_copy_latency_ticks": 0, 00:11:04.215 "io_error": {} 00:11:04.215 } 00:11:04.215 ] 00:11:04.215 }' 00:11:04.215 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:11:04.215 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=195588 00:11:04.215 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:11:04.216 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.216 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:04.216 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.216 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:11:04.216 "tick_rate": 2500000000, 00:11:04.216 "ticks": 14264015777258424, 00:11:04.216 "name": "Malloc_STAT", 00:11:04.216 "channels": [ 00:11:04.216 { 00:11:04.216 "thread_id": 2, 00:11:04.216 "bytes_read": 413138944, 00:11:04.216 "num_read_ops": 100864, 00:11:04.216 "bytes_written": 0, 00:11:04.216 "num_write_ops": 0, 00:11:04.216 "bytes_unmapped": 0, 00:11:04.216 "num_unmap_ops": 0, 00:11:04.216 "bytes_copied": 0, 00:11:04.216 "num_copy_ops": 0, 00:11:04.216 "read_latency_ticks": 1259840032312, 00:11:04.216 "max_read_latency_ticks": 13311230, 00:11:04.216 "min_read_latency_ticks": 8298584, 00:11:04.216 "write_latency_ticks": 0, 00:11:04.216 "max_write_latency_ticks": 0, 00:11:04.216 "min_write_latency_ticks": 0, 00:11:04.216 "unmap_latency_ticks": 0, 00:11:04.216 "max_unmap_latency_ticks": 0, 00:11:04.216 
"min_unmap_latency_ticks": 0, 00:11:04.216 "copy_latency_ticks": 0, 00:11:04.216 "max_copy_latency_ticks": 0, 00:11:04.216 "min_copy_latency_ticks": 0 00:11:04.216 }, 00:11:04.216 { 00:11:04.216 "thread_id": 3, 00:11:04.216 "bytes_read": 418381824, 00:11:04.216 "num_read_ops": 102144, 00:11:04.216 "bytes_written": 0, 00:11:04.216 "num_write_ops": 0, 00:11:04.216 "bytes_unmapped": 0, 00:11:04.216 "num_unmap_ops": 0, 00:11:04.216 "bytes_copied": 0, 00:11:04.216 "num_copy_ops": 0, 00:11:04.216 "read_latency_ticks": 1262978247970, 00:11:04.216 "max_read_latency_ticks": 14835796, 00:11:04.216 "min_read_latency_ticks": 8288770, 00:11:04.216 "write_latency_ticks": 0, 00:11:04.216 "max_write_latency_ticks": 0, 00:11:04.216 "min_write_latency_ticks": 0, 00:11:04.216 "unmap_latency_ticks": 0, 00:11:04.216 "max_unmap_latency_ticks": 0, 00:11:04.216 "min_unmap_latency_ticks": 0, 00:11:04.216 "copy_latency_ticks": 0, 00:11:04.216 "max_copy_latency_ticks": 0, 00:11:04.216 "min_copy_latency_ticks": 0 00:11:04.216 } 00:11:04.216 ] 00:11:04.216 }' 00:11:04.216 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:11:04.216 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=100864 00:11:04.216 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=100864 00:11:04.216 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=102144 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=203008 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- 
common/autotest_common.sh@10 -- # set +x 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:11:04.476 "tick_rate": 2500000000, 00:11:04.476 "ticks": 14264016042479988, 00:11:04.476 "bdevs": [ 00:11:04.476 { 00:11:04.476 "name": "Malloc_STAT", 00:11:04.476 "bytes_read": 875606528, 00:11:04.476 "num_read_ops": 213764, 00:11:04.476 "bytes_written": 0, 00:11:04.476 "num_write_ops": 0, 00:11:04.476 "bytes_unmapped": 0, 00:11:04.476 "num_unmap_ops": 0, 00:11:04.476 "bytes_copied": 0, 00:11:04.476 "num_copy_ops": 0, 00:11:04.476 "read_latency_ticks": 2657119027538, 00:11:04.476 "max_read_latency_ticks": 14835796, 00:11:04.476 "min_read_latency_ticks": 261814, 00:11:04.476 "write_latency_ticks": 0, 00:11:04.476 "max_write_latency_ticks": 0, 00:11:04.476 "min_write_latency_ticks": 0, 00:11:04.476 "unmap_latency_ticks": 0, 00:11:04.476 "max_unmap_latency_ticks": 0, 00:11:04.476 "min_unmap_latency_ticks": 0, 00:11:04.476 "copy_latency_ticks": 0, 00:11:04.476 "max_copy_latency_ticks": 0, 00:11:04.476 "min_copy_latency_ticks": 0, 00:11:04.476 "io_error": {} 00:11:04.476 } 00:11:04.476 ] 00:11:04.476 }' 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=213764 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 203008 -lt 195588 ']' 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 203008 -gt 213764 ']' 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:04.476 00:11:04.476 
Latency(us) 00:11:04.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.476 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:11:04.476 Malloc_STAT : 2.15 51101.08 199.61 0.00 0.00 4997.43 1448.35 5373.95 00:11:04.476 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:11:04.476 Malloc_STAT : 2.15 51682.25 201.88 0.00 0.00 4941.91 1159.99 5950.67 00:11:04.476 =================================================================================================================== 00:11:04.476 Total : 102783.33 401.50 0.00 0.00 4969.51 1159.99 5950.67 00:11:04.476 0 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 820410 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # '[' -z 820410 ']' 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # kill -0 820410 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # uname 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 820410 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 820410' 00:11:04.476 killing process with pid 820410 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@969 -- # kill 820410 00:11:04.476 Received shutdown signal, test time was about 2.239404 seconds 00:11:04.476 00:11:04.476 Latency(us) 
00:11:04.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.476 =================================================================================================================== 00:11:04.476 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:04.476 13:11:14 blockdev_general.bdev_stat -- common/autotest_common.sh@974 -- # wait 820410 00:11:04.736 13:11:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT 00:11:04.736 00:11:04.736 real 0m3.578s 00:11:04.736 user 0m7.147s 00:11:04.736 sys 0m0.471s 00:11:04.736 13:11:15 blockdev_general.bdev_stat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.736 13:11:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:04.736 ************************************ 00:11:04.736 END TEST bdev_stat 00:11:04.736 ************************************ 00:11:04.736 13:11:15 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]] 00:11:04.736 13:11:15 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]] 00:11:04.736 13:11:15 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:11:04.736 13:11:15 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup 00:11:04.736 13:11:15 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:11:04.736 13:11:15 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:11:04.736 13:11:15 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:11:04.736 13:11:15 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:11:04.736 13:11:15 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:11:04.736 13:11:15 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:11:04.736 00:11:04.736 real 1m54.503s 00:11:04.736 user 7m25.774s 00:11:04.736 sys 0m21.674s 00:11:04.736 13:11:15 
blockdev_general -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.736 13:11:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:04.736 ************************************ 00:11:04.736 END TEST blockdev_general 00:11:04.736 ************************************ 00:11:04.736 13:11:15 -- spdk/autotest.sh@194 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:11:04.736 13:11:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:04.736 13:11:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.736 13:11:15 -- common/autotest_common.sh@10 -- # set +x 00:11:04.736 ************************************ 00:11:04.736 START TEST bdev_raid 00:11:04.736 ************************************ 00:11:04.736 13:11:15 bdev_raid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:11:04.996 * Looking for test storage... 00:11:04.996 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:11:04.996 13:11:15 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:11:04.996 13:11:15 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:11:04.996 13:11:15 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:11:04.996 13:11:15 bdev_raid -- bdev/bdev_raid.sh@927 -- # mkdir -p /raidtest 00:11:04.996 13:11:15 bdev_raid -- bdev/bdev_raid.sh@928 -- # trap 'cleanup; exit 1' EXIT 00:11:04.996 13:11:15 bdev_raid -- bdev/bdev_raid.sh@930 -- # base_blocklen=512 00:11:04.996 13:11:15 bdev_raid -- bdev/bdev_raid.sh@932 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:11:04.996 13:11:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:04.996 13:11:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:11:04.996 13:11:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.996 ************************************ 00:11:04.996 START TEST raid0_resize_superblock_test 00:11:04.996 ************************************ 00:11:04.996 13:11:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:11:04.996 13:11:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=0 00:11:04.996 13:11:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=821273 00:11:04.996 13:11:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 821273' 00:11:04.996 Process raid pid: 821273 00:11:04.996 13:11:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:04.996 13:11:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 821273 /var/tmp/spdk-raid.sock 00:11:04.996 13:11:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 821273 ']' 00:11:04.996 13:11:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:04.996 13:11:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.996 13:11:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:04.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:04.996 13:11:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.996 13:11:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.996 [2024-07-25 13:11:15.426019] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:11:04.996 [2024-07-25 13:11:15.426080] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 
0000:3d:02.1 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:05.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.256 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:05.256 [2024-07-25 13:11:15.558532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.256 [2024-07-25 13:11:15.644465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.256 [2024-07-25 13:11:15.712716] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.256 [2024-07-25 13:11:15.712751] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.189 13:11:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:06.189 13:11:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:06.189 13:11:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b 
malloc0 512 512 00:11:06.189 malloc0 00:11:06.189 13:11:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:11:06.448 [2024-07-25 13:11:16.873883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:06.448 [2024-07-25 13:11:16.873928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.448 [2024-07-25 13:11:16.873947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2557c60 00:11:06.448 [2024-07-25 13:11:16.873964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.448 [2024-07-25 13:11:16.875496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.448 [2024-07-25 13:11:16.875524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:06.448 pt0 00:11:06.448 13:11:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:11:06.707 f4e9e564-aa77-48d5-97ce-ece2e8c679ad 00:11:06.966 13:11:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:11:06.966 c68cb301-1a32-4c61-9894-ab312e31ae04 00:11:06.966 13:11:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:11:07.224 557802d5-e5cd-4cb7-8a8f-8a1d81aecb62 00:11:07.224 13:11:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 00:11:07.224 13:11:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@884 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 0 -z 64 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:11:07.484 [2024-07-25 13:11:17.863739] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev c68cb301-1a32-4c61-9894-ab312e31ae04 is claimed 00:11:07.484 [2024-07-25 13:11:17.863817] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 557802d5-e5cd-4cb7-8a8f-8a1d81aecb62 is claimed 00:11:07.484 [2024-07-25 13:11:17.863934] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2704e00 00:11:07.484 [2024-07-25 13:11:17.863945] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:11:07.484 [2024-07-25 13:11:17.864125] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2705320 00:11:07.485 [2024-07-25 13:11:17.864280] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2704e00 00:11:07.485 [2024-07-25 13:11:17.864290] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x2704e00 00:11:07.485 [2024-07-25 13:11:17.864400] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.485 13:11:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:11:07.485 13:11:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:11:07.743 13:11:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:11:07.743 13:11:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:11:07.743 13:11:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:11:08.001 13:11:18 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:11:08.001 13:11:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:08.001 13:11:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:11:08.001 13:11:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:08.001 13:11:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:08.259 [2024-07-25 13:11:18.553752] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.259 13:11:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:08.259 13:11:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:08.259 13:11:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 245760 == 245760 )) 00:11:08.259 13:11:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:11:08.517 [2024-07-25 13:11:18.770261] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:08.517 [2024-07-25 13:11:18.770282] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c68cb301-1a32-4c61-9894-ab312e31ae04' was resized: old size 131072, new size 204800 00:11:08.517 13:11:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:11:08.517 [2024-07-25 13:11:18.994804] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:08.517 [2024-07-25 13:11:18.994823] 
bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '557802d5-e5cd-4cb7-8a8f-8a1d81aecb62' was resized: old size 131072, new size 204800 00:11:08.517 [2024-07-25 13:11:18.994845] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:11:08.775 13:11:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:11:08.775 13:11:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:11:08.775 13:11:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:11:08.775 13:11:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:11:08.775 13:11:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:11:09.034 13:11:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:11:09.034 13:11:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:11:09.034 13:11:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # jq '.[].num_blocks' 00:11:09.034 13:11:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:11:09.034 13:11:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:09.292 [2024-07-25 13:11:19.676705] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.292 13:11:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:11:09.292 13:11:19 bdev_raid.raid0_resize_superblock_test 
-- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:11:09.292 13:11:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # (( 393216 == 393216 )) 00:11:09.292 13:11:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:11:09.555 [2024-07-25 13:11:19.909120] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:11:09.555 [2024-07-25 13:11:19.909180] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:11:09.555 [2024-07-25 13:11:19.909189] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.555 [2024-07-25 13:11:19.909200] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:11:09.555 [2024-07-25 13:11:19.909271] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.555 [2024-07-25 13:11:19.909300] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.555 [2024-07-25 13:11:19.909311] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2704e00 name Raid, state offline 00:11:09.555 13:11:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:11:09.861 [2024-07-25 13:11:20.137701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:09.861 [2024-07-25 13:11:20.137750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.861 [2024-07-25 13:11:20.137775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2701e20 00:11:09.861 [2024-07-25 13:11:20.137787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.861 [2024-07-25 
13:11:20.139285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.861 [2024-07-25 13:11:20.139312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:09.861 [2024-07-25 13:11:20.140423] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c68cb301-1a32-4c61-9894-ab312e31ae04 00:11:09.861 [2024-07-25 13:11:20.140456] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev c68cb301-1a32-4c61-9894-ab312e31ae04 is claimed 00:11:09.861 [2024-07-25 13:11:20.140536] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 557802d5-e5cd-4cb7-8a8f-8a1d81aecb62 00:11:09.861 [2024-07-25 13:11:20.140553] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 557802d5-e5cd-4cb7-8a8f-8a1d81aecb62 is claimed 00:11:09.861 [2024-07-25 13:11:20.140655] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 557802d5-e5cd-4cb7-8a8f-8a1d81aecb62 (2) smaller than existing raid bdev Raid (3) 00:11:09.861 [2024-07-25 13:11:20.140683] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2705610 00:11:09.861 [2024-07-25 13:11:20.140690] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:11:09.861 [2024-07-25 13:11:20.140838] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2556c70 00:11:09.861 [2024-07-25 13:11:20.140966] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2705610 00:11:09.861 [2024-07-25 13:11:20.140975] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x2705610 00:11:09.861 [2024-07-25 13:11:20.141074] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.861 pt0 00:11:09.861 13:11:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:11:09.861 13:11:20 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@918 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:09.861 13:11:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:11:09.861 13:11:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # jq '.[].num_blocks' 00:11:10.120 [2024-07-25 13:11:20.366516] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.120 13:11:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:11:10.120 13:11:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:11:10.120 13:11:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # (( 393216 == 393216 )) 00:11:10.120 13:11:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 821273 00:11:10.120 13:11:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 821273 ']' 00:11:10.120 13:11:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 821273 00:11:10.120 13:11:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:10.120 13:11:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.120 13:11:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 821273 00:11:10.120 13:11:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:10.120 13:11:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:10.120 13:11:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 821273' 00:11:10.120 killing process with pid 821273 00:11:10.120 13:11:20 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 821273 00:11:10.120 [2024-07-25 13:11:20.445111] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.120 [2024-07-25 13:11:20.445168] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.120 [2024-07-25 13:11:20.445207] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.120 [2024-07-25 13:11:20.445218] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2705610 name Raid, state offline 00:11:10.120 13:11:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 821273 00:11:10.120 [2024-07-25 13:11:20.524460] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:10.378 13:11:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:11:10.378 00:11:10.378 real 0m5.349s 00:11:10.378 user 0m8.647s 00:11:10.378 sys 0m1.158s 00:11:10.378 13:11:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.378 13:11:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.378 ************************************ 00:11:10.378 END TEST raid0_resize_superblock_test 00:11:10.378 ************************************ 00:11:10.378 13:11:20 bdev_raid -- bdev/bdev_raid.sh@933 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:11:10.378 13:11:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:10.378 13:11:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.378 13:11:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:10.378 ************************************ 00:11:10.378 START TEST raid1_resize_superblock_test 00:11:10.378 ************************************ 00:11:10.378 13:11:20 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:11:10.378 13:11:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=1 00:11:10.378 13:11:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=822135 00:11:10.378 13:11:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 822135' 00:11:10.378 Process raid pid: 822135 00:11:10.378 13:11:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:10.378 13:11:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 822135 /var/tmp/spdk-raid.sock 00:11:10.378 13:11:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 822135 ']' 00:11:10.378 13:11:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:10.378 13:11:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:10.378 13:11:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:10.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:10.378 13:11:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:10.378 13:11:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.378 [2024-07-25 13:11:20.857456] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:11:10.378 [2024-07-25 13:11:20.857512] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:10.636 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:10.636 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:10.636 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.636 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:10.636 [2024-07-25 13:11:20.990848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.636 [2024-07-25 13:11:21.076479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.894 [2024-07-25 13:11:21.131096] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.894 [2024-07-25 13:11:21.131136] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.459 13:11:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:11.459 13:11:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:11.459 13:11:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:11:11.717 malloc0 00:11:11.717 13:11:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:11:11.974 [2024-07-25 13:11:22.290291] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:11.974 [2024-07-25 13:11:22.290334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.974 [2024-07-25 13:11:22.290357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb3bc60 00:11:11.974 [2024-07-25 13:11:22.290369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.974 [2024-07-25 13:11:22.291862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.974 [2024-07-25 13:11:22.291890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:11.974 pt0 00:11:11.974 13:11:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:11:12.232 8a0d060c-78e8-4e8b-becf-4a3972b89d77 00:11:12.232 13:11:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:11:12.490 7696195f-0b6b-4c8e-abc7-34a1c78d4759 00:11:12.490 13:11:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:11:12.750 55443e18-2344-49e5-b9fa-1e267baf0d03 00:11:12.750 13:11:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 00:11:12.750 13:11:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:11:13.008 [2024-07-25 13:11:23.278603] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7696195f-0b6b-4c8e-abc7-34a1c78d4759 is claimed 
00:11:13.008 [2024-07-25 13:11:23.278677] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 55443e18-2344-49e5-b9fa-1e267baf0d03 is claimed 00:11:13.008 [2024-07-25 13:11:23.278798] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xce8e00 00:11:13.008 [2024-07-25 13:11:23.278809] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:11:13.008 [2024-07-25 13:11:23.278986] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xce9320 00:11:13.008 [2024-07-25 13:11:23.279132] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xce8e00 00:11:13.008 [2024-07-25 13:11:23.279149] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0xce8e00 00:11:13.008 [2024-07-25 13:11:23.279260] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.008 13:11:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:11:13.008 13:11:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:11:13.267 13:11:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:11:13.267 13:11:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:11:13.267 13:11:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:11:13.526 13:11:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:11:13.526 13:11:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:13.526 13:11:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:13.526 13:11:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:13.526 13:11:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:11:13.526 [2024-07-25 13:11:23.964591] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.526 13:11:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:13.526 13:11:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:13.526 13:11:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 122880 == 122880 )) 00:11:13.527 13:11:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:11:13.785 [2024-07-25 13:11:24.177077] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:13.785 [2024-07-25 13:11:24.177098] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7696195f-0b6b-4c8e-abc7-34a1c78d4759' was resized: old size 131072, new size 204800 00:11:13.785 13:11:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:11:14.046 [2024-07-25 13:11:24.401626] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:14.046 [2024-07-25 13:11:24.401644] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '55443e18-2344-49e5-b9fa-1e267baf0d03' was resized: old size 131072, new size 204800 00:11:14.046 [2024-07-25 13:11:24.401665] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 
to 196608 00:11:14.046 13:11:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:11:14.046 13:11:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:11:14.304 13:11:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:11:14.304 13:11:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:11:14.304 13:11:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:11:14.562 13:11:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:11:14.562 13:11:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:11:14.562 13:11:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:14.562 13:11:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:11:14.562 13:11:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # jq '.[].num_blocks' 00:11:14.821 [2024-07-25 13:11:25.075497] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.821 13:11:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:11:14.821 13:11:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:11:14.821 13:11:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # (( 196608 == 196608 )) 00:11:14.822 13:11:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:11:14.822 [2024-07-25 13:11:25.299895] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:11:14.822 [2024-07-25 13:11:25.299945] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:11:14.822 [2024-07-25 13:11:25.299966] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:11:14.822 [2024-07-25 13:11:25.300071] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.822 [2024-07-25 13:11:25.300209] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.822 [2024-07-25 13:11:25.300264] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.822 [2024-07-25 13:11:25.300276] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xce8e00 name Raid, state offline 00:11:15.081 13:11:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:11:15.081 [2024-07-25 13:11:25.524457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:15.081 [2024-07-25 13:11:25.524494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.081 [2024-07-25 13:11:25.524511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xce5e20 00:11:15.081 [2024-07-25 13:11:25.524523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.081 [2024-07-25 13:11:25.525989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.081 [2024-07-25 13:11:25.526016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:15.081 [2024-07-25 13:11:25.527109] 
bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7696195f-0b6b-4c8e-abc7-34a1c78d4759 00:11:15.081 [2024-07-25 13:11:25.527152] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7696195f-0b6b-4c8e-abc7-34a1c78d4759 is claimed 00:11:15.081 [2024-07-25 13:11:25.527235] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 55443e18-2344-49e5-b9fa-1e267baf0d03 00:11:15.081 [2024-07-25 13:11:25.527252] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 55443e18-2344-49e5-b9fa-1e267baf0d03 is claimed 00:11:15.081 [2024-07-25 13:11:25.527352] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 55443e18-2344-49e5-b9fa-1e267baf0d03 (2) smaller than existing raid bdev Raid (3) 00:11:15.081 [2024-07-25 13:11:25.527381] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xceb450 00:11:15.081 [2024-07-25 13:11:25.527388] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:15.081 [2024-07-25 13:11:25.527536] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb3b630 00:11:15.081 [2024-07-25 13:11:25.527669] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xceb450 00:11:15.081 [2024-07-25 13:11:25.527678] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0xceb450 00:11:15.081 [2024-07-25 13:11:25.527773] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.081 pt0 00:11:15.081 13:11:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:11:15.081 13:11:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:15.081 13:11:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:11:15.081 13:11:25 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # jq '.[].num_blocks' 00:11:15.341 [2024-07-25 13:11:25.741257] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.341 13:11:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:11:15.341 13:11:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:11:15.341 13:11:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # (( 196608 == 196608 )) 00:11:15.341 13:11:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 822135 00:11:15.341 13:11:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 822135 ']' 00:11:15.341 13:11:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 822135 00:11:15.341 13:11:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:15.341 13:11:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:15.341 13:11:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 822135 00:11:15.341 13:11:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:15.341 13:11:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:15.341 13:11:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 822135' 00:11:15.341 killing process with pid 822135 00:11:15.341 13:11:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 822135 00:11:15.341 [2024-07-25 13:11:25.815278] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:15.341 [2024-07-25 13:11:25.815320] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:11:15.341 [2024-07-25 13:11:25.815358] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.341 [2024-07-25 13:11:25.815368] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xceb450 name Raid, state offline 00:11:15.341 13:11:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 822135 00:11:15.600 [2024-07-25 13:11:25.893241] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.600 13:11:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:11:15.600 00:11:15.600 real 0m5.280s 00:11:15.600 user 0m8.592s 00:11:15.600 sys 0m1.118s 00:11:15.600 13:11:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:15.600 13:11:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.600 ************************************ 00:11:15.600 END TEST raid1_resize_superblock_test 00:11:15.600 ************************************ 00:11:15.859 13:11:26 bdev_raid -- bdev/bdev_raid.sh@935 -- # uname -s 00:11:15.859 13:11:26 bdev_raid -- bdev/bdev_raid.sh@935 -- # '[' Linux = Linux ']' 00:11:15.859 13:11:26 bdev_raid -- bdev/bdev_raid.sh@935 -- # modprobe -n nbd 00:11:15.859 13:11:26 bdev_raid -- bdev/bdev_raid.sh@936 -- # has_nbd=true 00:11:15.859 13:11:26 bdev_raid -- bdev/bdev_raid.sh@937 -- # modprobe nbd 00:11:15.859 13:11:26 bdev_raid -- bdev/bdev_raid.sh@938 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:11:15.859 13:11:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:15.859 13:11:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:15.859 13:11:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.859 ************************************ 00:11:15.859 START TEST raid_function_test_raid0 00:11:15.859 ************************************ 
00:11:15.859 13:11:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:11:15.859 13:11:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:11:15.859 13:11:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:11:15.859 13:11:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:11:15.859 13:11:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=823194 00:11:15.859 13:11:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 823194' 00:11:15.859 Process raid pid: 823194 00:11:15.859 13:11:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:15.859 13:11:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 823194 /var/tmp/spdk-raid.sock 00:11:15.859 13:11:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 823194 ']' 00:11:15.859 13:11:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:15.859 13:11:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.859 13:11:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:15.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:15.859 13:11:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.859 13:11:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:15.859 [2024-07-25 13:11:26.289107] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:11:15.859 [2024-07-25 13:11:26.289260] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 
0000:3d:02.1 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:16.118 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:16.118 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:16.118 [2024-07-25 13:11:26.500946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.118 [2024-07-25 13:11:26.583209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.377 [2024-07-25 13:11:26.637583] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.377 [2024-07-25 13:11:26.637608] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.942 13:11:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:16.942 13:11:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:11:16.942 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:11:16.942 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local 
raid_level=raid0 00:11:16.942 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:11:16.942 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:11:16.942 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:11:16.942 [2024-07-25 13:11:27.380339] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:16.942 [2024-07-25 13:11:27.381685] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:16.942 [2024-07-25 13:11:27.381737] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b0dad0 00:11:16.942 [2024-07-25 13:11:27.381747] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:16.942 [2024-07-25 13:11:27.381920] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1970d00 00:11:16.942 [2024-07-25 13:11:27.382023] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b0dad0 00:11:16.942 [2024-07-25 13:11:27.382032] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x1b0dad0 00:11:16.942 [2024-07-25 13:11:27.382122] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.942 Base_1 00:11:16.942 Base_2 00:11:16.942 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:11:16.942 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:11:16.942 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:11:17.199 13:11:27 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:11:17.199 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:11:17.199 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:11:17.199 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:17.199 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:11:17.199 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:17.199 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:17.199 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:17.199 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:11:17.199 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:17.199 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:17.199 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:11:17.470 [2024-07-25 13:11:27.853600] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1950b30 00:11:17.470 /dev/nbd0 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.470 1+0 records in 00:11:17.470 1+0 records out 00:11:17.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028851 s, 14.2 MB/s 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:17.470 13:11:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:17.730 { 00:11:17.730 "nbd_device": "/dev/nbd0", 00:11:17.730 "bdev_name": "raid" 00:11:17.730 } 00:11:17.730 ]' 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:17.730 { 00:11:17.730 "nbd_device": "/dev/nbd0", 00:11:17.730 "bdev_name": "raid" 00:11:17.730 } 00:11:17.730 ]' 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 
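The `nbd_get_count` step above pipes the `nbd_get_disks` JSON through `jq` and counts `/dev/nbd` entries. A standalone sketch of that parsing (assumptions: the JSON is hardcoded here instead of coming from `rpc.py`, and `jq` is installed):

```shell
#!/usr/bin/env bash
set -e
# Same JSON shape the log shows for one attached NBD disk.
nbd_disks_json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "raid" } ]'
# Extract the device names, then count how many look like /dev/nbd*.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
echo "$count"
```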
00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:11:17.730 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:11:17.992 4096+0 records in 00:11:17.992 4096+0 records out 00:11:17.992 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0292349 s, 71.7 MB/s 00:11:17.992 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:11:18.252 4096+0 records in 00:11:18.252 4096+0 records out 00:11:18.252 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.27357 s, 7.7 MB/s 00:11:18.252 13:11:28 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:11:18.252 128+0 records in 00:11:18.252 128+0 records out 00:11:18.252 65536 bytes (66 kB, 64 KiB) copied, 0.000816339 s, 80.3 MB/s 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:11:18.252 2035+0 records in 00:11:18.252 2035+0 records out 00:11:18.252 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0116748 
s, 89.2 MB/s 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:11:18.252 456+0 records in 00:11:18.252 456+0 records out 00:11:18.252 233472 bytes (233 kB, 228 KiB) copied, 0.00268342 s, 87.0 MB/s 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:18.252 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:11:18.512 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:18.512 [2024-07-25 13:11:28.853904] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.512 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:18.512 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:18.512 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:18.512 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:18.512 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:18.512 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:11:18.512 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:11:18.512 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:11:18.512 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:18.512 13:11:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock nbd_get_disks 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 823194 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 823194 ']' 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 823194 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 823194 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 823194' 00:11:19.080 killing process with pid 823194 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 823194 00:11:19.080 [2024-07-25 13:11:29.483180] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.080 [2024-07-25 13:11:29.483241] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.080 [2024-07-25 13:11:29.483278] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.080 [2024-07-25 13:11:29.483289] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b0dad0 name raid, state offline 00:11:19.080 13:11:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 823194 00:11:19.080 [2024-07-25 13:11:29.498571] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.339 13:11:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:11:19.339 00:11:19.339 real 0m3.502s 00:11:19.339 user 0m4.662s 00:11:19.339 sys 0m1.330s 00:11:19.339 13:11:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:19.339 13:11:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:19.339 ************************************ 00:11:19.339 END TEST raid_function_test_raid0 00:11:19.339 ************************************ 00:11:19.339 13:11:29 bdev_raid -- bdev/bdev_raid.sh@939 -- # run_test raid_function_test_concat raid_function_test concat 00:11:19.339 13:11:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:19.339 13:11:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.339 13:11:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
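The `raid_unmap_data_verify` sequence that just ran can be sketched end to end: write random blocks through the device, byte-compare, then for each (offset, count) pair zero that range in the reference file, discard it on the device, and re-compare. Assumptions in this sketch: a plain temp file stands in for `/dev/nbd0` (so it needs no NBD device or root), and zeroing that file emulates what `blkdiscard` does to the raid bdev; the block size, offsets, and counts are the ones from the log.

```shell
#!/usr/bin/env bash
set -e
blksize=512                  # LOG-SEC reported by lsblk in the log
rw_blk_num=4096
unmap_blk_offs=(0 1028 321)  # per-iteration (offset, count) pairs
unmap_blk_nums=(128 2035 456)

ref=$(mktemp)                # reference data, like /raidtest/raidrandtest
dev=$(mktemp)                # stand-in for /dev/nbd0

# Write random blocks through the "device" and byte-compare them back.
dd if=/dev/urandom of="$ref" bs=$blksize count=$rw_blk_num status=none
dd if="$ref" of="$dev" bs=$blksize count=$rw_blk_num status=none
# Real test: blockdev --flushbufs /dev/nbd0 before comparing.
cmp -b -n $((blksize * rw_blk_num)) "$ref" "$dev"

# Discard-verify loop: zero a range in the reference file, "discard" the
# same range on the device, and re-compare the whole area.
for i in "${!unmap_blk_offs[@]}"; do
  off=${unmap_blk_offs[$i]}
  num=${unmap_blk_nums[$i]}
  unmap_off=$((off * blksize))   # byte offset passed to blkdiscard -o
  unmap_len=$((num * blksize))   # byte length passed to blkdiscard -l
  dd if=/dev/zero of="$ref" bs=$blksize seek=$off count=$num conv=notrunc status=none
  # Real test: blkdiscard -o "$unmap_off" -l "$unmap_len" /dev/nbd0
  #            blockdev --flushbufs /dev/nbd0
  dd if=/dev/zero of="$dev" bs=$blksize seek=$off count=$num conv=notrunc status=none
  cmp -b -n $((blksize * rw_blk_num)) "$ref" "$dev"
done
```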
00:11:19.339 ************************************ 00:11:19.339 START TEST raid_function_test_concat 00:11:19.339 ************************************ 00:11:19.339 13:11:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:11:19.339 13:11:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:11:19.339 13:11:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:11:19.339 13:11:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:11:19.339 13:11:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=823857 00:11:19.339 13:11:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 823857' 00:11:19.339 Process raid pid: 823857 00:11:19.339 13:11:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:19.339 13:11:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 823857 /var/tmp/spdk-raid.sock 00:11:19.339 13:11:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 823857 ']' 00:11:19.339 13:11:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:19.339 13:11:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:19.339 13:11:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:19.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
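The raid0 test above tore its app down with `killprocess`: confirm the PID is alive, inspect its command name, then kill and reap it. A sketch of that helper's pattern (assumptions: this is a simplified reconstruction of the log's helper, not SPDK's `autotest_common.sh` source):

```shell
#!/usr/bin/env bash
# Hypothetical killprocess sketch: check liveness, look at the command
# name (the log shows reactor_0), escalate only for sudo-owned processes.
killprocess() {
  local pid=$1 name
  kill -0 "$pid" 2>/dev/null || return 1        # nothing to kill
  name=$(ps --no-headers -o comm= -p "$pid")
  if [ "$name" = "sudo" ]; then
    sudo kill "$pid"
  else
    kill "$pid"
  fi
  wait "$pid" 2>/dev/null || true               # reap if it is our child
}
```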
00:11:19.339 13:11:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:19.339 13:11:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:19.339 [2024-07-25 13:11:29.825977] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:11:19.339 [2024-07-25 13:11:29.826034] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.599 [2024-07-25 13:11:29.961505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.599 [2024-07-25 13:11:30.047610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.857 [2024-07-25 13:11:30.109193] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.857 [2024-07-25 13:11:30.109251] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.427 13:11:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:20.427 13:11:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:11:20.427 13:11:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:11:20.427 13:11:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local
raid_level=concat 00:11:20.427 13:11:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:11:20.427 13:11:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:11:20.427 13:11:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:11:20.685 [2024-07-25 13:11:30.970099] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:20.685 [2024-07-25 13:11:30.971454] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:20.685 [2024-07-25 13:11:30.971514] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2119ad0 00:11:20.685 [2024-07-25 13:11:30.971524] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:20.685 [2024-07-25 13:11:30.971703] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f7cd00 00:11:20.685 [2024-07-25 13:11:30.971812] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2119ad0 00:11:20.686 [2024-07-25 13:11:30.971821] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x2119ad0 00:11:20.686 [2024-07-25 13:11:30.971913] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.686 Base_1 00:11:20.686 Base_2 00:11:20.686 13:11:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:11:20.686 13:11:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:11:20.686 13:11:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:11:20.944 13:11:31 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:11:20.944 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:11:20.944 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:11:20.944 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:20.944 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:11:20.944 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:20.944 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:20.944 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:20.944 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:11:20.944 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:20.944 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:20.944 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:11:21.203 [2024-07-25 13:11:31.435334] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f5d620 00:11:21.203 /dev/nbd0 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:11:21.203 13:11:31 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:21.203 1+0 records in 00:11:21.203 1+0 records out 00:11:21.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254251 s, 16.1 MB/s 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:11:21.203 13:11:31 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:21.203 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:21.463 { 00:11:21.463 "nbd_device": "/dev/nbd0", 00:11:21.463 "bdev_name": "raid" 00:11:21.463 } 00:11:21.463 ]' 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:21.463 { 00:11:21.463 "nbd_device": "/dev/nbd0", 00:11:21.463 "bdev_name": "raid" 00:11:21.463 } 00:11:21.463 ]' 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 
-- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:11:21.463 4096+0 records in 00:11:21.463 4096+0 records out 00:11:21.463 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.027986 s, 74.9 MB/s 00:11:21.463 13:11:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:11:21.723 4096+0 records in 00:11:21.723 4096+0 records out 00:11:21.723 2097152 bytes (2.1 MB, 2.0 
MiB) copied, 0.273386 s, 7.7 MB/s 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:11:21.723 128+0 records in 00:11:21.723 128+0 records out 00:11:21.723 65536 bytes (66 kB, 64 KiB) copied, 0.00082406 s, 79.5 MB/s 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:11:21.723 2035+0 records in 00:11:21.723 2035+0 
records out 00:11:21.723 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0106923 s, 97.4 MB/s 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:11:21.723 456+0 records in 00:11:21.723 456+0 records out 00:11:21.723 233472 bytes (233 kB, 228 KiB) copied, 0.00273379 s, 85.4 MB/s 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 
00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.723 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:11:21.982 [2024-07-25 13:11:32.427885] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.982 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:21.982 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:21.982 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:21.982 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.982 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.982 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:21.982 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:11:21.982 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.982 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:11:21.982 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:21.982 13:11:32 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:11:22.243 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:22.243 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:22.243 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:22.243 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:22.243 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:22.243 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:22.243 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:11:22.243 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:11:22.243 13:11:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:22.243 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:11:22.243 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:11:22.243 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 823857 00:11:22.243 13:11:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 823857 ']' 00:11:22.243 13:11:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 823857 00:11:22.243 13:11:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:11:22.567 13:11:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:22.567 13:11:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 823857 00:11:22.567 13:11:32 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:22.567 13:11:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:22.567 13:11:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 823857' 00:11:22.567 killing process with pid 823857 00:11:22.567 13:11:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 823857 00:11:22.567 [2024-07-25 13:11:32.786665] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.567 [2024-07-25 13:11:32.786726] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.567 [2024-07-25 13:11:32.786765] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.567 [2024-07-25 13:11:32.786781] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2119ad0 name raid, state offline 00:11:22.567 13:11:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 823857 00:11:22.567 [2024-07-25 13:11:32.802458] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.567 13:11:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:11:22.567 00:11:22.567 real 0m3.226s 00:11:22.567 user 0m4.182s 00:11:22.567 sys 0m1.242s 00:11:22.567 13:11:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.567 13:11:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:22.567 ************************************ 00:11:22.567 END TEST raid_function_test_concat 00:11:22.567 ************************************ 00:11:22.567 13:11:33 bdev_raid -- bdev/bdev_raid.sh@942 -- # run_test raid0_resize_test raid_resize_test 0 00:11:22.567 13:11:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:22.567 
13:11:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.567 13:11:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.826 ************************************ 00:11:22.826 START TEST raid0_resize_test 00:11:22.826 ************************************ 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=0 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=824472 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 824472' 00:11:22.826 Process raid pid: 824472 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 824472 /var/tmp/spdk-raid.sock 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 824472 ']' 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:22.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.826 13:11:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.826 [2024-07-25 13:11:33.125260] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:11:22.826 [2024-07-25 13:11:33.125318] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 
0000:3d:01.6 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3f:01.4 cannot be 
used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:22.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.826 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:22.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.827 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:22.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.827 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:22.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.827 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:22.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.827 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:22.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.827 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:22.827 [2024-07-25 13:11:33.260967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.085 [2024-07-25 13:11:33.347877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.085 [2024-07-25 13:11:33.407247] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.085 [2024-07-25 13:11:33.407281] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 
00:11:23.654 13:11:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.654 13:11:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:11:23.654 13:11:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:11:23.913 Base_1 00:11:23.913 13:11:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:11:24.173 Base_2 00:11:24.173 13:11:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 0 -eq 0 ']' 00:11:24.173 13:11:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:11:24.173 [2024-07-25 13:11:34.635704] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:24.173 [2024-07-25 13:11:34.637093] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:24.173 [2024-07-25 13:11:34.637144] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x267fcc0 00:11:24.173 [2024-07-25 13:11:34.637154] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:24.173 [2024-07-25 13:11:34.637346] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21c3030 00:11:24.173 [2024-07-25 13:11:34.637430] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x267fcc0 00:11:24.173 [2024-07-25 13:11:34.637439] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x267fcc0 00:11:24.173 [2024-07-25 13:11:34.637530] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.173 13:11:34 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:11:24.432 [2024-07-25 13:11:34.860275] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:24.432 [2024-07-25 13:11:34.860292] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:11:24.432 true 00:11:24.432 13:11:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:24.432 13:11:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:11:24.691 [2024-07-25 13:11:35.076973] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.691 13:11:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=131072 00:11:24.691 13:11:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=64 00:11:24.691 13:11:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 0 -eq 0 ']' 00:11:24.691 13:11:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # expected_size=64 00:11:24.691 13:11:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 64 '!=' 64 ']' 00:11:24.691 13:11:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:11:24.950 [2024-07-25 13:11:35.305442] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:24.950 [2024-07-25 13:11:35.305461] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:24.950 [2024-07-25 13:11:35.305484] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 
131072 to 262144 00:11:24.950 true 00:11:24.950 13:11:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:24.950 13:11:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:11:25.210 [2024-07-25 13:11:35.534178] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.210 13:11:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=262144 00:11:25.210 13:11:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=128 00:11:25.210 13:11:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 0 -eq 0 ']' 00:11:25.210 13:11:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@393 -- # expected_size=128 00:11:25.210 13:11:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 128 '!=' 128 ']' 00:11:25.210 13:11:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 824472 00:11:25.210 13:11:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 824472 ']' 00:11:25.210 13:11:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 824472 00:11:25.210 13:11:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:11:25.210 13:11:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:25.210 13:11:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 824472 00:11:25.210 13:11:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:25.210 13:11:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:25.210 13:11:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 824472' 00:11:25.210 killing process with pid 824472 00:11:25.210 13:11:35 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 824472 00:11:25.210 [2024-07-25 13:11:35.596729] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:25.210 [2024-07-25 13:11:35.596772] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.210 [2024-07-25 13:11:35.596810] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.210 [2024-07-25 13:11:35.596820] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x267fcc0 name Raid, state offline 00:11:25.210 13:11:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 824472 00:11:25.210 [2024-07-25 13:11:35.597997] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:25.470 13:11:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:11:25.470 00:11:25.470 real 0m2.701s 00:11:25.470 user 0m4.105s 00:11:25.470 sys 0m0.611s 00:11:25.470 13:11:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.470 13:11:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.470 ************************************ 00:11:25.470 END TEST raid0_resize_test 00:11:25.470 ************************************ 00:11:25.470 13:11:35 bdev_raid -- bdev/bdev_raid.sh@943 -- # run_test raid1_resize_test raid_resize_test 1 00:11:25.470 13:11:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:25.470 13:11:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:25.470 13:11:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:25.470 ************************************ 00:11:25.470 START TEST raid1_resize_test 00:11:25.470 ************************************ 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 
-- # local raid_level=1 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=825036 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 825036' 00:11:25.470 Process raid pid: 825036 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 825036 /var/tmp/spdk-raid.sock 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 825036 ']' 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:25.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:25.470 13:11:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.470 [2024-07-25 13:11:35.909815] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:11:25.470 [2024-07-25 13:11:35.909874] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.730 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:25.730 [identical qat_pci_device_allocate()/EAL "cannot be used" messages repeated for devices 0000:3d:01.1 through 0000:3f:02.7] 00:11:25.730 [2024-07-25 13:11:36.041938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.730 [2024-07-25 13:11:36.126100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.730 [2024-07-25 13:11:36.186866] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.730 [2024-07-25 13:11:36.186901] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.299 13:11:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:26.299 13:11:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:11:26.299 13:11:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:11:26.558 Base_1 00:11:26.558 13:11:36
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:11:26.817 Base_2 00:11:26.817 13:11:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 1 -eq 0 ']' 00:11:26.817 13:11:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@367 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r 1 -b 'Base_1 Base_2' -n Raid 00:11:27.076 [2024-07-25 13:11:37.387263] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:27.077 [2024-07-25 13:11:37.388643] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:27.077 [2024-07-25 13:11:37.388690] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x24f9cc0 00:11:27.077 [2024-07-25 13:11:37.388699] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:27.077 [2024-07-25 13:11:37.388890] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x203d030 00:11:27.077 [2024-07-25 13:11:37.388978] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24f9cc0 00:11:27.077 [2024-07-25 13:11:37.388986] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x24f9cc0 00:11:27.077 [2024-07-25 13:11:37.389077] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.077 13:11:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:11:27.336 [2024-07-25 13:11:37.607827] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:27.336 [2024-07-25 13:11:37.607847] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 
00:11:27.336 true 00:11:27.336 13:11:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:27.336 13:11:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:11:27.336 [2024-07-25 13:11:37.820520] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.595 13:11:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=65536 00:11:27.595 13:11:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=32 00:11:27.595 13:11:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 1 -eq 0 ']' 00:11:27.595 13:11:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@379 -- # expected_size=32 00:11:27.595 13:11:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 32 '!=' 32 ']' 00:11:27.595 13:11:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:11:27.595 [2024-07-25 13:11:38.048971] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:27.595 [2024-07-25 13:11:38.048990] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:27.595 [2024-07-25 13:11:38.049013] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:11:27.595 true 00:11:27.596 13:11:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:27.596 13:11:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:11:27.855 [2024-07-25 13:11:38.277707] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.855 
13:11:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=131072 00:11:27.855 13:11:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=64 00:11:27.855 13:11:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 1 -eq 0 ']' 00:11:27.855 13:11:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@395 -- # expected_size=64 00:11:27.855 13:11:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 64 '!=' 64 ']' 00:11:27.855 13:11:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 825036 00:11:27.855 13:11:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 825036 ']' 00:11:27.855 13:11:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 825036 00:11:27.855 13:11:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:11:27.855 13:11:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:27.855 13:11:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 825036 00:11:27.855 13:11:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:27.855 13:11:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:27.855 13:11:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 825036' 00:11:27.855 killing process with pid 825036 00:11:27.855 13:11:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 825036 00:11:27.855 [2024-07-25 13:11:38.334474] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:27.855 [2024-07-25 13:11:38.334520] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.855 13:11:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 825036 00:11:27.855 [2024-07-25 13:11:38.334845] bdev_raid.c: 
464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.855 [2024-07-25 13:11:38.334856] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24f9cc0 name Raid, state offline 00:11:27.855 [2024-07-25 13:11:38.335767] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:28.115 13:11:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:11:28.115 00:11:28.115 real 0m2.660s 00:11:28.115 user 0m4.036s 00:11:28.115 sys 0m0.594s 00:11:28.115 13:11:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.115 13:11:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.115 ************************************ 00:11:28.115 END TEST raid1_resize_test 00:11:28.115 ************************************ 00:11:28.115 13:11:38 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:11:28.115 13:11:38 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:11:28.115 13:11:38 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:11:28.115 13:11:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:28.115 13:11:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.115 13:11:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:28.115 ************************************ 00:11:28.115 START TEST raid_state_function_test 00:11:28.115 ************************************ 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 
64' 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=825530 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 825530' 00:11:28.115 Process raid pid: 825530 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 825530 /var/tmp/spdk-raid.sock 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 825530 ']' 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:28.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:28.115 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.375 [2024-07-25 13:11:38.651917] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:11:28.375 [2024-07-25 13:11:38.651975] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.375 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:28.375 [identical qat_pci_device_allocate()/EAL "cannot be used" messages repeated for devices 0000:3d:01.1 through 0000:3f:02.7] 00:11:28.375 [2024-07-25 13:11:38.783467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.634 [2024-07-25 13:11:38.869220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.634 [2024-07-25 13:11:38.926874] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.634 [2024-07-25 13:11:38.926907] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.203 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:29.203 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:29.203 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:29.463 [2024-07-25 13:11:39.764891] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:29.463 [2024-07-25 13:11:39.764929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:29.463 [2024-07-25 13:11:39.764943] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:29.463 [2024-07-25 13:11:39.764954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:29.463 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:29.463 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:29.463 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:29.463 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:29.463 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:29.463 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:29.463 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:29.463 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:29.463 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:29.463 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:29.463 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.463 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.723 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:29.723 "name": "Existed_Raid", 00:11:29.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.723 "strip_size_kb": 64, 
00:11:29.723 "state": "configuring", 00:11:29.723 "raid_level": "raid0", 00:11:29.723 "superblock": false, 00:11:29.723 "num_base_bdevs": 2, 00:11:29.723 "num_base_bdevs_discovered": 0, 00:11:29.723 "num_base_bdevs_operational": 2, 00:11:29.723 "base_bdevs_list": [ 00:11:29.723 { 00:11:29.723 "name": "BaseBdev1", 00:11:29.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.723 "is_configured": false, 00:11:29.723 "data_offset": 0, 00:11:29.723 "data_size": 0 00:11:29.723 }, 00:11:29.723 { 00:11:29.723 "name": "BaseBdev2", 00:11:29.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.723 "is_configured": false, 00:11:29.723 "data_offset": 0, 00:11:29.723 "data_size": 0 00:11:29.723 } 00:11:29.723 ] 00:11:29.723 }' 00:11:29.723 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:29.723 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.291 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:30.291 [2024-07-25 13:11:40.739394] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:30.291 [2024-07-25 13:11:40.739427] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1cf2f20 name Existed_Raid, state configuring 00:11:30.291 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:30.550 [2024-07-25 13:11:40.972003] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:30.550 [2024-07-25 13:11:40.972029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:30.550 [2024-07-25 13:11:40.972038] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:30.550 [2024-07-25 13:11:40.972048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:30.550 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:30.810 [2024-07-25 13:11:41.210051] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.810 BaseBdev1 00:11:30.810 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:30.810 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:30.810 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:30.810 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:30.810 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:30.810 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:30.810 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:31.069 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:31.329 [ 00:11:31.329 { 00:11:31.329 "name": "BaseBdev1", 00:11:31.329 "aliases": [ 00:11:31.329 "d15348b9-9de9-43d4-a876-6e4e9b2afeaa" 00:11:31.329 ], 00:11:31.329 "product_name": "Malloc disk", 00:11:31.329 "block_size": 512, 00:11:31.329 "num_blocks": 65536, 00:11:31.329 "uuid": 
"d15348b9-9de9-43d4-a876-6e4e9b2afeaa", 00:11:31.329 "assigned_rate_limits": { 00:11:31.329 "rw_ios_per_sec": 0, 00:11:31.329 "rw_mbytes_per_sec": 0, 00:11:31.329 "r_mbytes_per_sec": 0, 00:11:31.329 "w_mbytes_per_sec": 0 00:11:31.329 }, 00:11:31.329 "claimed": true, 00:11:31.329 "claim_type": "exclusive_write", 00:11:31.329 "zoned": false, 00:11:31.329 "supported_io_types": { 00:11:31.329 "read": true, 00:11:31.329 "write": true, 00:11:31.329 "unmap": true, 00:11:31.329 "flush": true, 00:11:31.329 "reset": true, 00:11:31.329 "nvme_admin": false, 00:11:31.329 "nvme_io": false, 00:11:31.329 "nvme_io_md": false, 00:11:31.329 "write_zeroes": true, 00:11:31.329 "zcopy": true, 00:11:31.329 "get_zone_info": false, 00:11:31.329 "zone_management": false, 00:11:31.329 "zone_append": false, 00:11:31.329 "compare": false, 00:11:31.329 "compare_and_write": false, 00:11:31.329 "abort": true, 00:11:31.329 "seek_hole": false, 00:11:31.329 "seek_data": false, 00:11:31.329 "copy": true, 00:11:31.329 "nvme_iov_md": false 00:11:31.329 }, 00:11:31.329 "memory_domains": [ 00:11:31.329 { 00:11:31.329 "dma_device_id": "system", 00:11:31.329 "dma_device_type": 1 00:11:31.329 }, 00:11:31.329 { 00:11:31.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.329 "dma_device_type": 2 00:11:31.329 } 00:11:31.329 ], 00:11:31.329 "driver_specific": {} 00:11:31.329 } 00:11:31.329 ] 00:11:31.329 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:31.329 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:31.329 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:31.329 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:31.329 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:31.329 13:11:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:31.329 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:31.329 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:31.329 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:31.329 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:31.329 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:31.329 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.329 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.589 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:31.589 "name": "Existed_Raid", 00:11:31.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.589 "strip_size_kb": 64, 00:11:31.589 "state": "configuring", 00:11:31.589 "raid_level": "raid0", 00:11:31.589 "superblock": false, 00:11:31.589 "num_base_bdevs": 2, 00:11:31.589 "num_base_bdevs_discovered": 1, 00:11:31.589 "num_base_bdevs_operational": 2, 00:11:31.589 "base_bdevs_list": [ 00:11:31.589 { 00:11:31.589 "name": "BaseBdev1", 00:11:31.589 "uuid": "d15348b9-9de9-43d4-a876-6e4e9b2afeaa", 00:11:31.589 "is_configured": true, 00:11:31.589 "data_offset": 0, 00:11:31.589 "data_size": 65536 00:11:31.589 }, 00:11:31.589 { 00:11:31.589 "name": "BaseBdev2", 00:11:31.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.589 "is_configured": false, 00:11:31.589 "data_offset": 0, 00:11:31.589 "data_size": 0 00:11:31.589 } 00:11:31.589 ] 00:11:31.589 }' 00:11:31.589 13:11:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:31.589 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.157 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:32.417 [2024-07-25 13:11:42.669894] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.417 [2024-07-25 13:11:42.669929] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1cf2810 name Existed_Raid, state configuring 00:11:32.417 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:32.417 [2024-07-25 13:11:42.902523] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.417 [2024-07-25 13:11:42.903890] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:32.417 [2024-07-25 13:11:42.903920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:32.677 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:32.677 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:32.677 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:32.677 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:32.677 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:32.677 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:11:32.677 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:32.677 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:32.677 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:32.677 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:32.677 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:32.677 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:32.677 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.677 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.677 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:32.677 "name": "Existed_Raid", 00:11:32.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.677 "strip_size_kb": 64, 00:11:32.677 "state": "configuring", 00:11:32.677 "raid_level": "raid0", 00:11:32.677 "superblock": false, 00:11:32.677 "num_base_bdevs": 2, 00:11:32.677 "num_base_bdevs_discovered": 1, 00:11:32.677 "num_base_bdevs_operational": 2, 00:11:32.677 "base_bdevs_list": [ 00:11:32.677 { 00:11:32.677 "name": "BaseBdev1", 00:11:32.677 "uuid": "d15348b9-9de9-43d4-a876-6e4e9b2afeaa", 00:11:32.677 "is_configured": true, 00:11:32.677 "data_offset": 0, 00:11:32.677 "data_size": 65536 00:11:32.677 }, 00:11:32.677 { 00:11:32.677 "name": "BaseBdev2", 00:11:32.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.677 "is_configured": false, 00:11:32.677 "data_offset": 0, 00:11:32.677 "data_size": 0 00:11:32.677 } 00:11:32.677 ] 00:11:32.677 }' 
00:11:32.677 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:32.677 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.615 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:33.615 [2024-07-25 13:11:43.976473] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.615 [2024-07-25 13:11:43.976506] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1cf3610 00:11:33.615 [2024-07-25 13:11:43.976514] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:33.615 [2024-07-25 13:11:43.976687] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1cdf690 00:11:33.615 [2024-07-25 13:11:43.976795] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1cf3610 00:11:33.615 [2024-07-25 13:11:43.976804] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1cf3610 00:11:33.615 [2024-07-25 13:11:43.976948] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.615 BaseBdev2 00:11:33.615 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:33.615 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:33.615 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:33.615 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:33.615 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:33.615 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:11:33.615 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:33.874 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:34.134 [ 00:11:34.134 { 00:11:34.134 "name": "BaseBdev2", 00:11:34.134 "aliases": [ 00:11:34.134 "63d4eff0-4d1f-456a-a1bb-7b2783eb4e26" 00:11:34.134 ], 00:11:34.134 "product_name": "Malloc disk", 00:11:34.134 "block_size": 512, 00:11:34.134 "num_blocks": 65536, 00:11:34.134 "uuid": "63d4eff0-4d1f-456a-a1bb-7b2783eb4e26", 00:11:34.134 "assigned_rate_limits": { 00:11:34.134 "rw_ios_per_sec": 0, 00:11:34.134 "rw_mbytes_per_sec": 0, 00:11:34.134 "r_mbytes_per_sec": 0, 00:11:34.134 "w_mbytes_per_sec": 0 00:11:34.134 }, 00:11:34.134 "claimed": true, 00:11:34.134 "claim_type": "exclusive_write", 00:11:34.134 "zoned": false, 00:11:34.134 "supported_io_types": { 00:11:34.134 "read": true, 00:11:34.134 "write": true, 00:11:34.134 "unmap": true, 00:11:34.134 "flush": true, 00:11:34.134 "reset": true, 00:11:34.134 "nvme_admin": false, 00:11:34.134 "nvme_io": false, 00:11:34.134 "nvme_io_md": false, 00:11:34.134 "write_zeroes": true, 00:11:34.134 "zcopy": true, 00:11:34.134 "get_zone_info": false, 00:11:34.134 "zone_management": false, 00:11:34.134 "zone_append": false, 00:11:34.134 "compare": false, 00:11:34.134 "compare_and_write": false, 00:11:34.134 "abort": true, 00:11:34.134 "seek_hole": false, 00:11:34.134 "seek_data": false, 00:11:34.134 "copy": true, 00:11:34.134 "nvme_iov_md": false 00:11:34.134 }, 00:11:34.134 "memory_domains": [ 00:11:34.134 { 00:11:34.134 "dma_device_id": "system", 00:11:34.134 "dma_device_type": 1 00:11:34.134 }, 00:11:34.134 { 00:11:34.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.134 "dma_device_type": 2 
00:11:34.134 } 00:11:34.134 ], 00:11:34.134 "driver_specific": {} 00:11:34.134 } 00:11:34.134 ] 00:11:34.134 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:34.134 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:34.134 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:34.134 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:34.134 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:34.134 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:34.134 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:34.134 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:34.134 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:34.134 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:34.134 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:34.134 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:34.134 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:34.134 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.134 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.394 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:11:34.394 "name": "Existed_Raid", 00:11:34.394 "uuid": "2e5d47c8-a327-4090-9d2d-3780dbbe0737", 00:11:34.394 "strip_size_kb": 64, 00:11:34.394 "state": "online", 00:11:34.394 "raid_level": "raid0", 00:11:34.394 "superblock": false, 00:11:34.394 "num_base_bdevs": 2, 00:11:34.394 "num_base_bdevs_discovered": 2, 00:11:34.394 "num_base_bdevs_operational": 2, 00:11:34.394 "base_bdevs_list": [ 00:11:34.394 { 00:11:34.394 "name": "BaseBdev1", 00:11:34.394 "uuid": "d15348b9-9de9-43d4-a876-6e4e9b2afeaa", 00:11:34.394 "is_configured": true, 00:11:34.394 "data_offset": 0, 00:11:34.394 "data_size": 65536 00:11:34.394 }, 00:11:34.394 { 00:11:34.394 "name": "BaseBdev2", 00:11:34.394 "uuid": "63d4eff0-4d1f-456a-a1bb-7b2783eb4e26", 00:11:34.394 "is_configured": true, 00:11:34.394 "data_offset": 0, 00:11:34.394 "data_size": 65536 00:11:34.394 } 00:11:34.394 ] 00:11:34.394 }' 00:11:34.394 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:34.394 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.995 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:34.995 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:34.995 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:34.995 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:34.995 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:34.995 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:34.995 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:34.995 13:11:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:34.995 [2024-07-25 13:11:45.436778] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.995 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:34.995 "name": "Existed_Raid", 00:11:34.995 "aliases": [ 00:11:34.995 "2e5d47c8-a327-4090-9d2d-3780dbbe0737" 00:11:34.995 ], 00:11:34.995 "product_name": "Raid Volume", 00:11:34.995 "block_size": 512, 00:11:34.995 "num_blocks": 131072, 00:11:34.995 "uuid": "2e5d47c8-a327-4090-9d2d-3780dbbe0737", 00:11:34.995 "assigned_rate_limits": { 00:11:34.995 "rw_ios_per_sec": 0, 00:11:34.995 "rw_mbytes_per_sec": 0, 00:11:34.995 "r_mbytes_per_sec": 0, 00:11:34.995 "w_mbytes_per_sec": 0 00:11:34.995 }, 00:11:34.995 "claimed": false, 00:11:34.995 "zoned": false, 00:11:34.995 "supported_io_types": { 00:11:34.995 "read": true, 00:11:34.995 "write": true, 00:11:34.995 "unmap": true, 00:11:34.995 "flush": true, 00:11:34.995 "reset": true, 00:11:34.995 "nvme_admin": false, 00:11:34.995 "nvme_io": false, 00:11:34.995 "nvme_io_md": false, 00:11:34.995 "write_zeroes": true, 00:11:34.995 "zcopy": false, 00:11:34.995 "get_zone_info": false, 00:11:34.995 "zone_management": false, 00:11:34.995 "zone_append": false, 00:11:34.995 "compare": false, 00:11:34.995 "compare_and_write": false, 00:11:34.995 "abort": false, 00:11:34.995 "seek_hole": false, 00:11:34.995 "seek_data": false, 00:11:34.995 "copy": false, 00:11:34.995 "nvme_iov_md": false 00:11:34.995 }, 00:11:34.995 "memory_domains": [ 00:11:34.995 { 00:11:34.995 "dma_device_id": "system", 00:11:34.995 "dma_device_type": 1 00:11:34.995 }, 00:11:34.995 { 00:11:34.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.995 "dma_device_type": 2 00:11:34.995 }, 00:11:34.995 { 00:11:34.995 "dma_device_id": "system", 00:11:34.995 "dma_device_type": 1 00:11:34.995 }, 00:11:34.995 { 00:11:34.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:34.995 "dma_device_type": 2 00:11:34.995 } 00:11:34.995 ], 00:11:34.995 "driver_specific": { 00:11:34.995 "raid": { 00:11:34.995 "uuid": "2e5d47c8-a327-4090-9d2d-3780dbbe0737", 00:11:34.995 "strip_size_kb": 64, 00:11:34.995 "state": "online", 00:11:34.995 "raid_level": "raid0", 00:11:34.995 "superblock": false, 00:11:34.995 "num_base_bdevs": 2, 00:11:34.995 "num_base_bdevs_discovered": 2, 00:11:34.995 "num_base_bdevs_operational": 2, 00:11:34.995 "base_bdevs_list": [ 00:11:34.995 { 00:11:34.995 "name": "BaseBdev1", 00:11:34.995 "uuid": "d15348b9-9de9-43d4-a876-6e4e9b2afeaa", 00:11:34.995 "is_configured": true, 00:11:34.995 "data_offset": 0, 00:11:34.995 "data_size": 65536 00:11:34.995 }, 00:11:34.995 { 00:11:34.995 "name": "BaseBdev2", 00:11:34.995 "uuid": "63d4eff0-4d1f-456a-a1bb-7b2783eb4e26", 00:11:34.995 "is_configured": true, 00:11:34.995 "data_offset": 0, 00:11:34.995 "data_size": 65536 00:11:34.995 } 00:11:34.995 ] 00:11:34.995 } 00:11:34.995 } 00:11:34.995 }' 00:11:34.995 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.254 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:35.254 BaseBdev2' 00:11:35.255 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:35.255 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:35.255 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:35.255 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:35.255 "name": "BaseBdev1", 00:11:35.255 "aliases": [ 00:11:35.255 "d15348b9-9de9-43d4-a876-6e4e9b2afeaa" 00:11:35.255 ], 00:11:35.255 "product_name": "Malloc disk", 
00:11:35.255 "block_size": 512, 00:11:35.255 "num_blocks": 65536, 00:11:35.255 "uuid": "d15348b9-9de9-43d4-a876-6e4e9b2afeaa", 00:11:35.255 "assigned_rate_limits": { 00:11:35.255 "rw_ios_per_sec": 0, 00:11:35.255 "rw_mbytes_per_sec": 0, 00:11:35.255 "r_mbytes_per_sec": 0, 00:11:35.255 "w_mbytes_per_sec": 0 00:11:35.255 }, 00:11:35.255 "claimed": true, 00:11:35.255 "claim_type": "exclusive_write", 00:11:35.255 "zoned": false, 00:11:35.255 "supported_io_types": { 00:11:35.255 "read": true, 00:11:35.255 "write": true, 00:11:35.255 "unmap": true, 00:11:35.255 "flush": true, 00:11:35.255 "reset": true, 00:11:35.255 "nvme_admin": false, 00:11:35.255 "nvme_io": false, 00:11:35.255 "nvme_io_md": false, 00:11:35.255 "write_zeroes": true, 00:11:35.255 "zcopy": true, 00:11:35.255 "get_zone_info": false, 00:11:35.255 "zone_management": false, 00:11:35.255 "zone_append": false, 00:11:35.255 "compare": false, 00:11:35.255 "compare_and_write": false, 00:11:35.255 "abort": true, 00:11:35.255 "seek_hole": false, 00:11:35.255 "seek_data": false, 00:11:35.255 "copy": true, 00:11:35.255 "nvme_iov_md": false 00:11:35.255 }, 00:11:35.255 "memory_domains": [ 00:11:35.255 { 00:11:35.255 "dma_device_id": "system", 00:11:35.255 "dma_device_type": 1 00:11:35.255 }, 00:11:35.255 { 00:11:35.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.255 "dma_device_type": 2 00:11:35.255 } 00:11:35.255 ], 00:11:35.255 "driver_specific": {} 00:11:35.255 }' 00:11:35.255 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.514 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.514 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:35.514 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.514 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.514 13:11:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:35.514 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:35.514 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:35.514 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:35.514 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:35.773 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:35.773 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:35.773 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:35.773 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:35.773 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:36.032 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:36.032 "name": "BaseBdev2", 00:11:36.032 "aliases": [ 00:11:36.032 "63d4eff0-4d1f-456a-a1bb-7b2783eb4e26" 00:11:36.032 ], 00:11:36.032 "product_name": "Malloc disk", 00:11:36.032 "block_size": 512, 00:11:36.032 "num_blocks": 65536, 00:11:36.032 "uuid": "63d4eff0-4d1f-456a-a1bb-7b2783eb4e26", 00:11:36.032 "assigned_rate_limits": { 00:11:36.032 "rw_ios_per_sec": 0, 00:11:36.032 "rw_mbytes_per_sec": 0, 00:11:36.032 "r_mbytes_per_sec": 0, 00:11:36.032 "w_mbytes_per_sec": 0 00:11:36.032 }, 00:11:36.032 "claimed": true, 00:11:36.032 "claim_type": "exclusive_write", 00:11:36.032 "zoned": false, 00:11:36.032 "supported_io_types": { 00:11:36.032 "read": true, 00:11:36.032 "write": true, 00:11:36.032 "unmap": true, 00:11:36.032 "flush": true, 00:11:36.032 "reset": 
true, 00:11:36.032 "nvme_admin": false, 00:11:36.032 "nvme_io": false, 00:11:36.032 "nvme_io_md": false, 00:11:36.032 "write_zeroes": true, 00:11:36.032 "zcopy": true, 00:11:36.032 "get_zone_info": false, 00:11:36.032 "zone_management": false, 00:11:36.032 "zone_append": false, 00:11:36.032 "compare": false, 00:11:36.032 "compare_and_write": false, 00:11:36.032 "abort": true, 00:11:36.032 "seek_hole": false, 00:11:36.032 "seek_data": false, 00:11:36.032 "copy": true, 00:11:36.032 "nvme_iov_md": false 00:11:36.032 }, 00:11:36.032 "memory_domains": [ 00:11:36.032 { 00:11:36.032 "dma_device_id": "system", 00:11:36.032 "dma_device_type": 1 00:11:36.032 }, 00:11:36.032 { 00:11:36.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.032 "dma_device_type": 2 00:11:36.032 } 00:11:36.032 ], 00:11:36.032 "driver_specific": {} 00:11:36.032 }' 00:11:36.032 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:36.032 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:36.032 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:36.032 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:36.032 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:36.032 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:36.032 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:36.032 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:36.292 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:36.292 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:36.292 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:36.292 13:11:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:36.292 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:36.552 [2024-07-25 13:11:46.848311] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.552 [2024-07-25 13:11:46.848342] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.552 [2024-07-25 13:11:46.848377] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.552 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.811 13:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:36.811 "name": "Existed_Raid", 00:11:36.811 "uuid": "2e5d47c8-a327-4090-9d2d-3780dbbe0737", 00:11:36.811 "strip_size_kb": 64, 00:11:36.811 "state": "offline", 00:11:36.811 "raid_level": "raid0", 00:11:36.811 "superblock": false, 00:11:36.811 "num_base_bdevs": 2, 00:11:36.811 "num_base_bdevs_discovered": 1, 00:11:36.811 "num_base_bdevs_operational": 1, 00:11:36.811 "base_bdevs_list": [ 00:11:36.811 { 00:11:36.811 "name": null, 00:11:36.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.811 "is_configured": false, 00:11:36.811 "data_offset": 0, 00:11:36.811 "data_size": 65536 00:11:36.811 }, 00:11:36.811 { 00:11:36.811 "name": "BaseBdev2", 00:11:36.811 "uuid": "63d4eff0-4d1f-456a-a1bb-7b2783eb4e26", 00:11:36.811 "is_configured": true, 00:11:36.811 "data_offset": 0, 00:11:36.811 "data_size": 65536 00:11:36.811 } 00:11:36.811 ] 00:11:36.811 }' 00:11:36.811 13:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:36.811 13:11:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.380 13:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:37.380 13:11:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:37.380 13:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.380 13:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:37.639 13:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:37.639 13:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.639 13:11:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:37.639 [2024-07-25 13:11:48.096649] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.639 [2024-07-25 13:11:48.096691] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1cf3610 name Existed_Raid, state offline 00:11:37.639 13:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:37.639 13:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:37.639 13:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.898 13:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:37.898 13:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:37.898 13:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:37.898 13:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:11:37.898 13:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 825530 
00:11:37.898 13:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 825530 ']' 00:11:37.898 13:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 825530 00:11:37.898 13:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:37.898 13:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:37.898 13:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 825530 00:11:38.158 13:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:38.158 13:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:38.158 13:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 825530' 00:11:38.158 killing process with pid 825530 00:11:38.158 13:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 825530 00:11:38.158 [2024-07-25 13:11:48.411306] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.158 13:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 825530 00:11:38.158 [2024-07-25 13:11:48.412144] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.158 13:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:38.158 00:11:38.158 real 0m10.010s 00:11:38.158 user 0m17.781s 00:11:38.158 sys 0m1.862s 00:11:38.158 13:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.158 13:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.158 ************************************ 00:11:38.158 END TEST raid_state_function_test 00:11:38.158 ************************************ 00:11:38.158 13:11:48 bdev_raid -- 
bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:11:38.158 13:11:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:38.418 13:11:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.418 13:11:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.418 ************************************ 00:11:38.418 START TEST raid_state_function_test_sb 00:11:38.418 ************************************ 00:11:38.418 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:11:38.418 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:38.418 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:38.419 13:11:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=827421 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 827421' 00:11:38.419 Process raid pid: 827421 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 827421 /var/tmp/spdk-raid.sock 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 827421 ']' 00:11:38.419 13:11:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:38.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:38.419 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.419 [2024-07-25 13:11:48.749907] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:11:38.419 [2024-07-25 13:11:48.749966] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3d:01.5 cannot be used 
00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:38.419 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:38.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.419 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:38.419 [2024-07-25 13:11:48.885507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.679 [2024-07-25 13:11:48.968485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.679 [2024-07-25 13:11:49.020604] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:11:38.679 [2024-07-25 13:11:49.020632] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.248 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.248 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:39.248 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:39.508 [2024-07-25 13:11:49.854052] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:39.508 [2024-07-25 13:11:49.854093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:39.508 [2024-07-25 13:11:49.854103] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:39.508 [2024-07-25 13:11:49.854114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:39.508 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:39.508 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:39.508 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:39.508 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:39.508 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:39.508 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:39.508 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:11:39.508 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:39.508 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:39.508 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:39.508 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.508 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.768 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:39.768 "name": "Existed_Raid", 00:11:39.768 "uuid": "f2bdca24-f3c5-4e04-94b4-95e4e66be713", 00:11:39.768 "strip_size_kb": 64, 00:11:39.768 "state": "configuring", 00:11:39.768 "raid_level": "raid0", 00:11:39.768 "superblock": true, 00:11:39.768 "num_base_bdevs": 2, 00:11:39.768 "num_base_bdevs_discovered": 0, 00:11:39.768 "num_base_bdevs_operational": 2, 00:11:39.768 "base_bdevs_list": [ 00:11:39.768 { 00:11:39.768 "name": "BaseBdev1", 00:11:39.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.768 "is_configured": false, 00:11:39.768 "data_offset": 0, 00:11:39.768 "data_size": 0 00:11:39.768 }, 00:11:39.768 { 00:11:39.768 "name": "BaseBdev2", 00:11:39.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.768 "is_configured": false, 00:11:39.768 "data_offset": 0, 00:11:39.768 "data_size": 0 00:11:39.768 } 00:11:39.768 ] 00:11:39.768 }' 00:11:39.768 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:39.768 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.337 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:40.597 [2024-07-25 13:11:50.896688] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:40.597 [2024-07-25 13:11:50.896721] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x105ff20 name Existed_Raid, state configuring 00:11:40.597 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:40.856 [2024-07-25 13:11:51.117286] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.856 [2024-07-25 13:11:51.117313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.856 [2024-07-25 13:11:51.117321] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:40.856 [2024-07-25 13:11:51.117332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:40.856 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:41.115 [2024-07-25 13:11:51.351391] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.115 BaseBdev1 00:11:41.115 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:41.115 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:41.115 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:41.115 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
00:11:41.115 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:41.115 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:41.115 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:41.115 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:41.374 [ 00:11:41.374 { 00:11:41.374 "name": "BaseBdev1", 00:11:41.374 "aliases": [ 00:11:41.374 "ca46eb6a-8b0d-4c9e-8d5c-4427af89d15d" 00:11:41.374 ], 00:11:41.374 "product_name": "Malloc disk", 00:11:41.374 "block_size": 512, 00:11:41.374 "num_blocks": 65536, 00:11:41.374 "uuid": "ca46eb6a-8b0d-4c9e-8d5c-4427af89d15d", 00:11:41.374 "assigned_rate_limits": { 00:11:41.374 "rw_ios_per_sec": 0, 00:11:41.374 "rw_mbytes_per_sec": 0, 00:11:41.374 "r_mbytes_per_sec": 0, 00:11:41.375 "w_mbytes_per_sec": 0 00:11:41.375 }, 00:11:41.375 "claimed": true, 00:11:41.375 "claim_type": "exclusive_write", 00:11:41.375 "zoned": false, 00:11:41.375 "supported_io_types": { 00:11:41.375 "read": true, 00:11:41.375 "write": true, 00:11:41.375 "unmap": true, 00:11:41.375 "flush": true, 00:11:41.375 "reset": true, 00:11:41.375 "nvme_admin": false, 00:11:41.375 "nvme_io": false, 00:11:41.375 "nvme_io_md": false, 00:11:41.375 "write_zeroes": true, 00:11:41.375 "zcopy": true, 00:11:41.375 "get_zone_info": false, 00:11:41.375 "zone_management": false, 00:11:41.375 "zone_append": false, 00:11:41.375 "compare": false, 00:11:41.375 "compare_and_write": false, 00:11:41.375 "abort": true, 00:11:41.375 "seek_hole": false, 00:11:41.375 "seek_data": false, 00:11:41.375 "copy": true, 00:11:41.375 "nvme_iov_md": false 00:11:41.375 }, 00:11:41.375 
"memory_domains": [ 00:11:41.375 { 00:11:41.375 "dma_device_id": "system", 00:11:41.375 "dma_device_type": 1 00:11:41.375 }, 00:11:41.375 { 00:11:41.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.375 "dma_device_type": 2 00:11:41.375 } 00:11:41.375 ], 00:11:41.375 "driver_specific": {} 00:11:41.375 } 00:11:41.375 ] 00:11:41.375 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:41.375 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:41.375 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:41.375 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:41.375 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:41.375 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:41.375 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:41.375 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:41.375 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:41.375 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:41.375 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:41.375 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.375 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.634 13:11:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:41.634 "name": "Existed_Raid", 00:11:41.634 "uuid": "ce940859-4337-4731-86fe-f82213f8e683", 00:11:41.634 "strip_size_kb": 64, 00:11:41.634 "state": "configuring", 00:11:41.634 "raid_level": "raid0", 00:11:41.634 "superblock": true, 00:11:41.634 "num_base_bdevs": 2, 00:11:41.634 "num_base_bdevs_discovered": 1, 00:11:41.634 "num_base_bdevs_operational": 2, 00:11:41.634 "base_bdevs_list": [ 00:11:41.634 { 00:11:41.634 "name": "BaseBdev1", 00:11:41.634 "uuid": "ca46eb6a-8b0d-4c9e-8d5c-4427af89d15d", 00:11:41.634 "is_configured": true, 00:11:41.635 "data_offset": 2048, 00:11:41.635 "data_size": 63488 00:11:41.635 }, 00:11:41.635 { 00:11:41.635 "name": "BaseBdev2", 00:11:41.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.635 "is_configured": false, 00:11:41.635 "data_offset": 0, 00:11:41.635 "data_size": 0 00:11:41.635 } 00:11:41.635 ] 00:11:41.635 }' 00:11:41.635 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:41.635 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.203 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:42.462 [2024-07-25 13:11:52.819299] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.462 [2024-07-25 13:11:52.819336] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x105f810 name Existed_Raid, state configuring 00:11:42.462 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:42.721 [2024-07-25 13:11:53.043927] bdev_raid.c:3312:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:42.721 [2024-07-25 13:11:53.045315] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.721 [2024-07-25 13:11:53.045348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:42.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:42.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:42.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:42.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:42.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:42.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:42.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:42.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:42.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:42.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:42.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:42.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:11:42.986 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:42.986 "name": "Existed_Raid", 00:11:42.986 "uuid": "988de60d-8581-4604-9a06-c78f333e0748", 00:11:42.986 "strip_size_kb": 64, 00:11:42.986 "state": "configuring", 00:11:42.986 "raid_level": "raid0", 00:11:42.986 "superblock": true, 00:11:42.986 "num_base_bdevs": 2, 00:11:42.986 "num_base_bdevs_discovered": 1, 00:11:42.986 "num_base_bdevs_operational": 2, 00:11:42.986 "base_bdevs_list": [ 00:11:42.986 { 00:11:42.986 "name": "BaseBdev1", 00:11:42.986 "uuid": "ca46eb6a-8b0d-4c9e-8d5c-4427af89d15d", 00:11:42.986 "is_configured": true, 00:11:42.986 "data_offset": 2048, 00:11:42.986 "data_size": 63488 00:11:42.986 }, 00:11:42.986 { 00:11:42.986 "name": "BaseBdev2", 00:11:42.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.986 "is_configured": false, 00:11:42.986 "data_offset": 0, 00:11:42.986 "data_size": 0 00:11:42.986 } 00:11:42.986 ] 00:11:42.986 }' 00:11:42.986 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:42.986 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.563 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:43.563 [2024-07-25 13:11:54.021630] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.563 [2024-07-25 13:11:54.021762] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1060610 00:11:43.563 [2024-07-25 13:11:54.021774] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:43.563 [2024-07-25 13:11:54.021932] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x104c690 00:11:43.563 [2024-07-25 13:11:54.022037] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1060610 00:11:43.563 [2024-07-25 13:11:54.022046] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1060610 00:11:43.563 [2024-07-25 13:11:54.022128] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.563 BaseBdev2 00:11:43.563 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:43.563 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:43.563 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:43.563 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:43.563 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:43.563 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:43.563 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:43.822 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:44.082 [ 00:11:44.082 { 00:11:44.082 "name": "BaseBdev2", 00:11:44.082 "aliases": [ 00:11:44.082 "5a09aa80-1293-485d-aac4-183d3b5dc4c8" 00:11:44.082 ], 00:11:44.082 "product_name": "Malloc disk", 00:11:44.082 "block_size": 512, 00:11:44.082 "num_blocks": 65536, 00:11:44.082 "uuid": "5a09aa80-1293-485d-aac4-183d3b5dc4c8", 00:11:44.082 "assigned_rate_limits": { 00:11:44.082 "rw_ios_per_sec": 0, 00:11:44.082 "rw_mbytes_per_sec": 0, 00:11:44.082 "r_mbytes_per_sec": 0, 00:11:44.082 
"w_mbytes_per_sec": 0 00:11:44.082 }, 00:11:44.082 "claimed": true, 00:11:44.082 "claim_type": "exclusive_write", 00:11:44.082 "zoned": false, 00:11:44.082 "supported_io_types": { 00:11:44.082 "read": true, 00:11:44.082 "write": true, 00:11:44.082 "unmap": true, 00:11:44.082 "flush": true, 00:11:44.082 "reset": true, 00:11:44.082 "nvme_admin": false, 00:11:44.082 "nvme_io": false, 00:11:44.082 "nvme_io_md": false, 00:11:44.082 "write_zeroes": true, 00:11:44.082 "zcopy": true, 00:11:44.082 "get_zone_info": false, 00:11:44.082 "zone_management": false, 00:11:44.082 "zone_append": false, 00:11:44.082 "compare": false, 00:11:44.082 "compare_and_write": false, 00:11:44.082 "abort": true, 00:11:44.082 "seek_hole": false, 00:11:44.082 "seek_data": false, 00:11:44.082 "copy": true, 00:11:44.082 "nvme_iov_md": false 00:11:44.082 }, 00:11:44.082 "memory_domains": [ 00:11:44.082 { 00:11:44.082 "dma_device_id": "system", 00:11:44.082 "dma_device_type": 1 00:11:44.082 }, 00:11:44.082 { 00:11:44.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.082 "dma_device_type": 2 00:11:44.082 } 00:11:44.082 ], 00:11:44.082 "driver_specific": {} 00:11:44.082 } 00:11:44.082 ] 00:11:44.082 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:44.082 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:44.082 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:44.082 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:44.082 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:44.082 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:44.082 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 
00:11:44.082 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:44.082 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:44.082 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:44.082 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:44.082 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:44.082 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:44.082 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.082 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.341 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:44.341 "name": "Existed_Raid", 00:11:44.341 "uuid": "988de60d-8581-4604-9a06-c78f333e0748", 00:11:44.341 "strip_size_kb": 64, 00:11:44.341 "state": "online", 00:11:44.341 "raid_level": "raid0", 00:11:44.341 "superblock": true, 00:11:44.341 "num_base_bdevs": 2, 00:11:44.341 "num_base_bdevs_discovered": 2, 00:11:44.341 "num_base_bdevs_operational": 2, 00:11:44.341 "base_bdevs_list": [ 00:11:44.341 { 00:11:44.341 "name": "BaseBdev1", 00:11:44.341 "uuid": "ca46eb6a-8b0d-4c9e-8d5c-4427af89d15d", 00:11:44.341 "is_configured": true, 00:11:44.341 "data_offset": 2048, 00:11:44.341 "data_size": 63488 00:11:44.341 }, 00:11:44.341 { 00:11:44.341 "name": "BaseBdev2", 00:11:44.341 "uuid": "5a09aa80-1293-485d-aac4-183d3b5dc4c8", 00:11:44.341 "is_configured": true, 00:11:44.341 "data_offset": 2048, 00:11:44.341 "data_size": 63488 00:11:44.341 } 00:11:44.341 ] 
00:11:44.341 }' 00:11:44.341 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:44.341 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.910 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:44.910 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:44.910 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:44.910 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:44.910 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:44.910 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:44.910 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:44.910 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:45.170 [2024-07-25 13:11:55.437612] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.170 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:45.170 "name": "Existed_Raid", 00:11:45.170 "aliases": [ 00:11:45.170 "988de60d-8581-4604-9a06-c78f333e0748" 00:11:45.170 ], 00:11:45.170 "product_name": "Raid Volume", 00:11:45.170 "block_size": 512, 00:11:45.170 "num_blocks": 126976, 00:11:45.170 "uuid": "988de60d-8581-4604-9a06-c78f333e0748", 00:11:45.170 "assigned_rate_limits": { 00:11:45.170 "rw_ios_per_sec": 0, 00:11:45.170 "rw_mbytes_per_sec": 0, 00:11:45.170 "r_mbytes_per_sec": 0, 00:11:45.170 "w_mbytes_per_sec": 0 00:11:45.170 }, 00:11:45.170 "claimed": false, 00:11:45.170 
"zoned": false, 00:11:45.170 "supported_io_types": { 00:11:45.170 "read": true, 00:11:45.170 "write": true, 00:11:45.170 "unmap": true, 00:11:45.170 "flush": true, 00:11:45.170 "reset": true, 00:11:45.170 "nvme_admin": false, 00:11:45.170 "nvme_io": false, 00:11:45.170 "nvme_io_md": false, 00:11:45.170 "write_zeroes": true, 00:11:45.170 "zcopy": false, 00:11:45.170 "get_zone_info": false, 00:11:45.170 "zone_management": false, 00:11:45.170 "zone_append": false, 00:11:45.170 "compare": false, 00:11:45.170 "compare_and_write": false, 00:11:45.170 "abort": false, 00:11:45.170 "seek_hole": false, 00:11:45.170 "seek_data": false, 00:11:45.170 "copy": false, 00:11:45.170 "nvme_iov_md": false 00:11:45.170 }, 00:11:45.170 "memory_domains": [ 00:11:45.170 { 00:11:45.170 "dma_device_id": "system", 00:11:45.170 "dma_device_type": 1 00:11:45.170 }, 00:11:45.170 { 00:11:45.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.170 "dma_device_type": 2 00:11:45.170 }, 00:11:45.170 { 00:11:45.170 "dma_device_id": "system", 00:11:45.170 "dma_device_type": 1 00:11:45.170 }, 00:11:45.170 { 00:11:45.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.170 "dma_device_type": 2 00:11:45.170 } 00:11:45.170 ], 00:11:45.170 "driver_specific": { 00:11:45.170 "raid": { 00:11:45.170 "uuid": "988de60d-8581-4604-9a06-c78f333e0748", 00:11:45.170 "strip_size_kb": 64, 00:11:45.170 "state": "online", 00:11:45.170 "raid_level": "raid0", 00:11:45.170 "superblock": true, 00:11:45.170 "num_base_bdevs": 2, 00:11:45.170 "num_base_bdevs_discovered": 2, 00:11:45.170 "num_base_bdevs_operational": 2, 00:11:45.170 "base_bdevs_list": [ 00:11:45.170 { 00:11:45.170 "name": "BaseBdev1", 00:11:45.170 "uuid": "ca46eb6a-8b0d-4c9e-8d5c-4427af89d15d", 00:11:45.170 "is_configured": true, 00:11:45.170 "data_offset": 2048, 00:11:45.170 "data_size": 63488 00:11:45.170 }, 00:11:45.170 { 00:11:45.170 "name": "BaseBdev2", 00:11:45.170 "uuid": "5a09aa80-1293-485d-aac4-183d3b5dc4c8", 00:11:45.170 "is_configured": true, 
00:11:45.170 "data_offset": 2048, 00:11:45.170 "data_size": 63488 00:11:45.170 } 00:11:45.170 ] 00:11:45.170 } 00:11:45.170 } 00:11:45.170 }' 00:11:45.170 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:45.170 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:45.170 BaseBdev2' 00:11:45.170 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:45.170 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:45.170 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:45.430 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:45.430 "name": "BaseBdev1", 00:11:45.430 "aliases": [ 00:11:45.430 "ca46eb6a-8b0d-4c9e-8d5c-4427af89d15d" 00:11:45.430 ], 00:11:45.430 "product_name": "Malloc disk", 00:11:45.430 "block_size": 512, 00:11:45.430 "num_blocks": 65536, 00:11:45.430 "uuid": "ca46eb6a-8b0d-4c9e-8d5c-4427af89d15d", 00:11:45.430 "assigned_rate_limits": { 00:11:45.430 "rw_ios_per_sec": 0, 00:11:45.430 "rw_mbytes_per_sec": 0, 00:11:45.430 "r_mbytes_per_sec": 0, 00:11:45.430 "w_mbytes_per_sec": 0 00:11:45.430 }, 00:11:45.430 "claimed": true, 00:11:45.430 "claim_type": "exclusive_write", 00:11:45.430 "zoned": false, 00:11:45.430 "supported_io_types": { 00:11:45.430 "read": true, 00:11:45.430 "write": true, 00:11:45.430 "unmap": true, 00:11:45.430 "flush": true, 00:11:45.430 "reset": true, 00:11:45.430 "nvme_admin": false, 00:11:45.430 "nvme_io": false, 00:11:45.430 "nvme_io_md": false, 00:11:45.430 "write_zeroes": true, 00:11:45.430 "zcopy": true, 00:11:45.430 "get_zone_info": false, 00:11:45.430 
"zone_management": false, 00:11:45.430 "zone_append": false, 00:11:45.430 "compare": false, 00:11:45.430 "compare_and_write": false, 00:11:45.430 "abort": true, 00:11:45.430 "seek_hole": false, 00:11:45.430 "seek_data": false, 00:11:45.430 "copy": true, 00:11:45.430 "nvme_iov_md": false 00:11:45.430 }, 00:11:45.430 "memory_domains": [ 00:11:45.430 { 00:11:45.430 "dma_device_id": "system", 00:11:45.430 "dma_device_type": 1 00:11:45.430 }, 00:11:45.430 { 00:11:45.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.430 "dma_device_type": 2 00:11:45.430 } 00:11:45.430 ], 00:11:45.430 "driver_specific": {} 00:11:45.430 }' 00:11:45.430 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:45.430 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:45.430 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:45.430 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:45.430 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:45.430 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:45.430 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:45.690 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:45.690 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:45.690 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:45.690 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:45.690 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:45.690 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:11:45.690 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:45.690 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:45.949 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:45.949 "name": "BaseBdev2", 00:11:45.949 "aliases": [ 00:11:45.949 "5a09aa80-1293-485d-aac4-183d3b5dc4c8" 00:11:45.949 ], 00:11:45.949 "product_name": "Malloc disk", 00:11:45.949 "block_size": 512, 00:11:45.949 "num_blocks": 65536, 00:11:45.949 "uuid": "5a09aa80-1293-485d-aac4-183d3b5dc4c8", 00:11:45.949 "assigned_rate_limits": { 00:11:45.949 "rw_ios_per_sec": 0, 00:11:45.949 "rw_mbytes_per_sec": 0, 00:11:45.949 "r_mbytes_per_sec": 0, 00:11:45.949 "w_mbytes_per_sec": 0 00:11:45.949 }, 00:11:45.949 "claimed": true, 00:11:45.949 "claim_type": "exclusive_write", 00:11:45.949 "zoned": false, 00:11:45.949 "supported_io_types": { 00:11:45.949 "read": true, 00:11:45.949 "write": true, 00:11:45.949 "unmap": true, 00:11:45.949 "flush": true, 00:11:45.949 "reset": true, 00:11:45.949 "nvme_admin": false, 00:11:45.949 "nvme_io": false, 00:11:45.950 "nvme_io_md": false, 00:11:45.950 "write_zeroes": true, 00:11:45.950 "zcopy": true, 00:11:45.950 "get_zone_info": false, 00:11:45.950 "zone_management": false, 00:11:45.950 "zone_append": false, 00:11:45.950 "compare": false, 00:11:45.950 "compare_and_write": false, 00:11:45.950 "abort": true, 00:11:45.950 "seek_hole": false, 00:11:45.950 "seek_data": false, 00:11:45.950 "copy": true, 00:11:45.950 "nvme_iov_md": false 00:11:45.950 }, 00:11:45.950 "memory_domains": [ 00:11:45.950 { 00:11:45.950 "dma_device_id": "system", 00:11:45.950 "dma_device_type": 1 00:11:45.950 }, 00:11:45.950 { 00:11:45.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.950 "dma_device_type": 2 00:11:45.950 } 00:11:45.950 ], 
00:11:45.950 "driver_specific": {} 00:11:45.950 }' 00:11:45.950 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:45.950 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:45.950 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:45.950 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:45.950 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.209 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:46.209 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.209 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.209 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:46.209 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.209 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.209 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:46.209 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:46.469 [2024-07-25 13:11:56.837096] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:46.469 [2024-07-25 13:11:56.837122] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.469 [2024-07-25 13:11:56.837169] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 
00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.469 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.729 13:11:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:46.729 "name": "Existed_Raid", 00:11:46.729 "uuid": "988de60d-8581-4604-9a06-c78f333e0748", 00:11:46.729 "strip_size_kb": 64, 00:11:46.729 "state": "offline", 00:11:46.729 "raid_level": "raid0", 00:11:46.729 "superblock": true, 00:11:46.729 "num_base_bdevs": 2, 00:11:46.729 "num_base_bdevs_discovered": 1, 00:11:46.729 "num_base_bdevs_operational": 1, 00:11:46.729 "base_bdevs_list": [ 00:11:46.729 { 00:11:46.729 "name": null, 00:11:46.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.729 "is_configured": false, 00:11:46.729 "data_offset": 2048, 00:11:46.729 "data_size": 63488 00:11:46.729 }, 00:11:46.729 { 00:11:46.729 "name": "BaseBdev2", 00:11:46.729 "uuid": "5a09aa80-1293-485d-aac4-183d3b5dc4c8", 00:11:46.729 "is_configured": true, 00:11:46.729 "data_offset": 2048, 00:11:46.729 "data_size": 63488 00:11:46.729 } 00:11:46.729 ] 00:11:46.729 }' 00:11:46.729 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:46.729 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.297 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:47.297 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:47.297 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.297 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:47.564 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:47.564 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:47.564 13:11:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:47.895 [2024-07-25 13:11:58.089442] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:47.895 [2024-07-25 13:11:58.089491] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1060610 name Existed_Raid, state offline 00:11:47.895 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:47.895 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:47.895 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.895 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:47.895 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:47.895 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:47.895 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:11:47.895 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 827421 00:11:47.895 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 827421 ']' 00:11:47.895 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 827421 00:11:47.895 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:47.895 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:47.895 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 827421 
00:11:48.155 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:48.155 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:48.155 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 827421' 00:11:48.155 killing process with pid 827421 00:11:48.155 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 827421 00:11:48.155 [2024-07-25 13:11:58.412203] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.155 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 827421 00:11:48.155 [2024-07-25 13:11:58.413051] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:48.155 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:11:48.155 00:11:48.155 real 0m9.921s 00:11:48.155 user 0m17.561s 00:11:48.155 sys 0m1.904s 00:11:48.155 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:48.155 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.155 ************************************ 00:11:48.155 END TEST raid_state_function_test_sb 00:11:48.155 ************************************ 00:11:48.415 13:11:58 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:11:48.415 13:11:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:48.415 13:11:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.415 13:11:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:48.415 ************************************ 00:11:48.415 START TEST raid_superblock_test 00:11:48.415 ************************************ 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=829389 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 829389 /var/tmp/spdk-raid.sock 00:11:48.415 13:11:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 829389 ']' 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:48.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:48.415 13:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.415 [2024-07-25 13:11:58.783796] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:11:48.415 [2024-07-25 13:11:58.783925] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid829389 ] 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3d:02.3 cannot be used 
00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:48.683 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:48.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:48.683 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:48.683 [2024-07-25 13:11:58.985992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.683 [2024-07-25 13:11:59.068292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.683 [2024-07-25 13:11:59.133557] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.683 [2024-07-25 13:11:59.133600] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.253 13:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:49.253 13:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:49.253 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:11:49.253 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:11:49.253 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:11:49.253 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:11:49.253 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:49.253 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:49.253 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:11:49.253 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:49.253 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:49.511 malloc1 00:11:49.511 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:49.770 [2024-07-25 13:12:00.066323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:49.770 [2024-07-25 13:12:00.066373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.770 [2024-07-25 13:12:00.066393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21d82f0 00:11:49.770 [2024-07-25 13:12:00.066410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.770 [2024-07-25 13:12:00.067990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.770 [2024-07-25 13:12:00.068019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:49.770 pt1 00:11:49.770 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:11:49.770 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:11:49.770 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:11:49.770 13:12:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:11:49.770 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:49.770 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:49.770 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:11:49.770 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:49.770 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:50.029 malloc2 00:11:50.029 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:50.289 [2024-07-25 13:12:00.524086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:50.289 [2024-07-25 13:12:00.524130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.289 [2024-07-25 13:12:00.524154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x236ff70 00:11:50.289 [2024-07-25 13:12:00.524166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.289 [2024-07-25 13:12:00.525588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.289 [2024-07-25 13:12:00.525616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:50.289 pt2 00:11:50.289 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:11:50.289 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:11:50.289 13:12:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:11:50.289 [2024-07-25 13:12:00.752708] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:50.289 [2024-07-25 13:12:00.753892] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:50.289 [2024-07-25 13:12:00.754010] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2372760 00:11:50.289 [2024-07-25 13:12:00.754021] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:50.289 [2024-07-25 13:12:00.754216] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2375400 00:11:50.289 [2024-07-25 13:12:00.754336] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2372760 00:11:50.289 [2024-07-25 13:12:00.754345] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2372760 00:11:50.289 [2024-07-25 13:12:00.754451] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.289 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:50.289 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:50.289 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:50.289 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:50.289 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:50.289 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:50.289 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:11:50.289 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:50.289 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:50.289 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:50.289 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.289 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.548 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:50.548 "name": "raid_bdev1", 00:11:50.548 "uuid": "976f3eb5-2f8d-4b66-9083-eae4265b45cc", 00:11:50.548 "strip_size_kb": 64, 00:11:50.548 "state": "online", 00:11:50.548 "raid_level": "raid0", 00:11:50.548 "superblock": true, 00:11:50.548 "num_base_bdevs": 2, 00:11:50.549 "num_base_bdevs_discovered": 2, 00:11:50.549 "num_base_bdevs_operational": 2, 00:11:50.549 "base_bdevs_list": [ 00:11:50.549 { 00:11:50.549 "name": "pt1", 00:11:50.549 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.549 "is_configured": true, 00:11:50.549 "data_offset": 2048, 00:11:50.549 "data_size": 63488 00:11:50.549 }, 00:11:50.549 { 00:11:50.549 "name": "pt2", 00:11:50.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.549 "is_configured": true, 00:11:50.549 "data_offset": 2048, 00:11:50.549 "data_size": 63488 00:11:50.549 } 00:11:50.549 ] 00:11:50.549 }' 00:11:50.549 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:50.549 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.118 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:11:51.118 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- 
# local raid_bdev_name=raid_bdev1 00:11:51.118 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:51.118 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:51.118 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:51.118 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:51.118 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:51.118 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:51.378 [2024-07-25 13:12:01.751543] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:51.378 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:51.378 "name": "raid_bdev1", 00:11:51.378 "aliases": [ 00:11:51.378 "976f3eb5-2f8d-4b66-9083-eae4265b45cc" 00:11:51.378 ], 00:11:51.378 "product_name": "Raid Volume", 00:11:51.378 "block_size": 512, 00:11:51.378 "num_blocks": 126976, 00:11:51.378 "uuid": "976f3eb5-2f8d-4b66-9083-eae4265b45cc", 00:11:51.378 "assigned_rate_limits": { 00:11:51.378 "rw_ios_per_sec": 0, 00:11:51.378 "rw_mbytes_per_sec": 0, 00:11:51.378 "r_mbytes_per_sec": 0, 00:11:51.378 "w_mbytes_per_sec": 0 00:11:51.378 }, 00:11:51.378 "claimed": false, 00:11:51.378 "zoned": false, 00:11:51.378 "supported_io_types": { 00:11:51.378 "read": true, 00:11:51.378 "write": true, 00:11:51.378 "unmap": true, 00:11:51.378 "flush": true, 00:11:51.378 "reset": true, 00:11:51.378 "nvme_admin": false, 00:11:51.378 "nvme_io": false, 00:11:51.378 "nvme_io_md": false, 00:11:51.378 "write_zeroes": true, 00:11:51.378 "zcopy": false, 00:11:51.378 "get_zone_info": false, 00:11:51.378 "zone_management": false, 00:11:51.378 "zone_append": false, 00:11:51.378 "compare": false, 
00:11:51.378 "compare_and_write": false, 00:11:51.378 "abort": false, 00:11:51.378 "seek_hole": false, 00:11:51.378 "seek_data": false, 00:11:51.378 "copy": false, 00:11:51.378 "nvme_iov_md": false 00:11:51.378 }, 00:11:51.378 "memory_domains": [ 00:11:51.378 { 00:11:51.378 "dma_device_id": "system", 00:11:51.378 "dma_device_type": 1 00:11:51.378 }, 00:11:51.378 { 00:11:51.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.378 "dma_device_type": 2 00:11:51.378 }, 00:11:51.378 { 00:11:51.378 "dma_device_id": "system", 00:11:51.378 "dma_device_type": 1 00:11:51.378 }, 00:11:51.378 { 00:11:51.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.378 "dma_device_type": 2 00:11:51.378 } 00:11:51.378 ], 00:11:51.378 "driver_specific": { 00:11:51.378 "raid": { 00:11:51.378 "uuid": "976f3eb5-2f8d-4b66-9083-eae4265b45cc", 00:11:51.378 "strip_size_kb": 64, 00:11:51.378 "state": "online", 00:11:51.378 "raid_level": "raid0", 00:11:51.378 "superblock": true, 00:11:51.378 "num_base_bdevs": 2, 00:11:51.378 "num_base_bdevs_discovered": 2, 00:11:51.378 "num_base_bdevs_operational": 2, 00:11:51.378 "base_bdevs_list": [ 00:11:51.378 { 00:11:51.378 "name": "pt1", 00:11:51.378 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:51.378 "is_configured": true, 00:11:51.378 "data_offset": 2048, 00:11:51.378 "data_size": 63488 00:11:51.378 }, 00:11:51.378 { 00:11:51.378 "name": "pt2", 00:11:51.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.378 "is_configured": true, 00:11:51.378 "data_offset": 2048, 00:11:51.378 "data_size": 63488 00:11:51.378 } 00:11:51.378 ] 00:11:51.378 } 00:11:51.378 } 00:11:51.378 }' 00:11:51.378 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:51.378 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:51.378 pt2' 00:11:51.378 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:11:51.378 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:51.378 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:51.636 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:51.636 "name": "pt1", 00:11:51.636 "aliases": [ 00:11:51.636 "00000000-0000-0000-0000-000000000001" 00:11:51.636 ], 00:11:51.636 "product_name": "passthru", 00:11:51.636 "block_size": 512, 00:11:51.636 "num_blocks": 65536, 00:11:51.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:51.636 "assigned_rate_limits": { 00:11:51.636 "rw_ios_per_sec": 0, 00:11:51.636 "rw_mbytes_per_sec": 0, 00:11:51.636 "r_mbytes_per_sec": 0, 00:11:51.636 "w_mbytes_per_sec": 0 00:11:51.636 }, 00:11:51.636 "claimed": true, 00:11:51.636 "claim_type": "exclusive_write", 00:11:51.636 "zoned": false, 00:11:51.636 "supported_io_types": { 00:11:51.636 "read": true, 00:11:51.636 "write": true, 00:11:51.636 "unmap": true, 00:11:51.636 "flush": true, 00:11:51.636 "reset": true, 00:11:51.636 "nvme_admin": false, 00:11:51.636 "nvme_io": false, 00:11:51.636 "nvme_io_md": false, 00:11:51.636 "write_zeroes": true, 00:11:51.636 "zcopy": true, 00:11:51.636 "get_zone_info": false, 00:11:51.636 "zone_management": false, 00:11:51.636 "zone_append": false, 00:11:51.636 "compare": false, 00:11:51.636 "compare_and_write": false, 00:11:51.636 "abort": true, 00:11:51.636 "seek_hole": false, 00:11:51.636 "seek_data": false, 00:11:51.636 "copy": true, 00:11:51.636 "nvme_iov_md": false 00:11:51.636 }, 00:11:51.636 "memory_domains": [ 00:11:51.636 { 00:11:51.636 "dma_device_id": "system", 00:11:51.636 "dma_device_type": 1 00:11:51.636 }, 00:11:51.636 { 00:11:51.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.636 "dma_device_type": 2 00:11:51.636 } 00:11:51.636 ], 00:11:51.636 
"driver_specific": { 00:11:51.636 "passthru": { 00:11:51.636 "name": "pt1", 00:11:51.636 "base_bdev_name": "malloc1" 00:11:51.636 } 00:11:51.636 } 00:11:51.636 }' 00:11:51.636 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:51.636 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:51.896 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:51.896 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:51.896 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:51.896 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:51.896 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:51.896 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:51.896 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:51.896 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:51.896 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:52.156 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:52.156 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:52.156 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:52.156 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:52.156 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:52.156 "name": "pt2", 00:11:52.156 "aliases": [ 00:11:52.156 "00000000-0000-0000-0000-000000000002" 00:11:52.156 ], 00:11:52.156 "product_name": 
"passthru", 00:11:52.156 "block_size": 512, 00:11:52.156 "num_blocks": 65536, 00:11:52.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:52.156 "assigned_rate_limits": { 00:11:52.156 "rw_ios_per_sec": 0, 00:11:52.156 "rw_mbytes_per_sec": 0, 00:11:52.156 "r_mbytes_per_sec": 0, 00:11:52.156 "w_mbytes_per_sec": 0 00:11:52.156 }, 00:11:52.156 "claimed": true, 00:11:52.156 "claim_type": "exclusive_write", 00:11:52.156 "zoned": false, 00:11:52.156 "supported_io_types": { 00:11:52.156 "read": true, 00:11:52.156 "write": true, 00:11:52.156 "unmap": true, 00:11:52.156 "flush": true, 00:11:52.156 "reset": true, 00:11:52.156 "nvme_admin": false, 00:11:52.156 "nvme_io": false, 00:11:52.156 "nvme_io_md": false, 00:11:52.156 "write_zeroes": true, 00:11:52.156 "zcopy": true, 00:11:52.156 "get_zone_info": false, 00:11:52.156 "zone_management": false, 00:11:52.156 "zone_append": false, 00:11:52.156 "compare": false, 00:11:52.156 "compare_and_write": false, 00:11:52.156 "abort": true, 00:11:52.156 "seek_hole": false, 00:11:52.156 "seek_data": false, 00:11:52.156 "copy": true, 00:11:52.156 "nvme_iov_md": false 00:11:52.156 }, 00:11:52.156 "memory_domains": [ 00:11:52.156 { 00:11:52.156 "dma_device_id": "system", 00:11:52.156 "dma_device_type": 1 00:11:52.156 }, 00:11:52.156 { 00:11:52.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.156 "dma_device_type": 2 00:11:52.156 } 00:11:52.156 ], 00:11:52.156 "driver_specific": { 00:11:52.156 "passthru": { 00:11:52.156 "name": "pt2", 00:11:52.156 "base_bdev_name": "malloc2" 00:11:52.156 } 00:11:52.156 } 00:11:52.156 }' 00:11:52.156 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:52.416 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:52.416 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:52.416 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:52.416 13:12:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:52.416 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:52.416 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:52.416 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:52.416 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:52.416 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:52.675 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:52.676 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:52.676 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:52.676 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:11:52.676 [2024-07-25 13:12:03.163265] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.935 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=976f3eb5-2f8d-4b66-9083-eae4265b45cc 00:11:52.935 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 976f3eb5-2f8d-4b66-9083-eae4265b45cc ']' 00:11:52.935 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:52.935 [2024-07-25 13:12:03.391632] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:52.935 [2024-07-25 13:12:03.391648] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.935 [2024-07-25 13:12:03.391693] bdev_raid.c: 487:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:52.935 [2024-07-25 13:12:03.391731] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.935 [2024-07-25 13:12:03.391741] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2372760 name raid_bdev1, state offline 00:11:52.935 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.935 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:11:53.194 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:11:53.194 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:11:53.194 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:11:53.194 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:53.452 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:11:53.452 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:53.710 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:53.710 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:53.969 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:11:53.969 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:11:53.969 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:53.969 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:11:53.969 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:53.969 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:53.969 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:53.969 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:53.969 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:53.969 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:53.969 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:53.969 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:11:53.969 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:11:54.228 [2024-07-25 13:12:04.538617] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:54.228 [2024-07-25 13:12:04.539867] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:54.228 [2024-07-25 13:12:04.539915] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:54.228 [2024-07-25 13:12:04.539952] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:54.228 [2024-07-25 13:12:04.539970] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.228 [2024-07-25 13:12:04.539979] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23729f0 name raid_bdev1, state configuring 00:11:54.228 request: 00:11:54.228 { 00:11:54.228 "name": "raid_bdev1", 00:11:54.228 "raid_level": "raid0", 00:11:54.228 "base_bdevs": [ 00:11:54.228 "malloc1", 00:11:54.228 "malloc2" 00:11:54.228 ], 00:11:54.228 "strip_size_kb": 64, 00:11:54.228 "superblock": false, 00:11:54.228 "method": "bdev_raid_create", 00:11:54.228 "req_id": 1 00:11:54.228 } 00:11:54.228 Got JSON-RPC error response 00:11:54.228 response: 00:11:54.228 { 00:11:54.228 "code": -17, 00:11:54.228 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:54.228 } 00:11:54.228 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:54.228 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:54.228 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:54.228 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:54.228 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.228 13:12:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:11:54.486 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:11:54.487 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:11:54.487 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:54.746 [2024-07-25 13:12:04.991752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:54.746 [2024-07-25 13:12:04.991793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.746 [2024-07-25 13:12:04.991809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x237bbf0 00:11:54.746 [2024-07-25 13:12:04.991821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.746 [2024-07-25 13:12:04.993290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.746 [2024-07-25 13:12:04.993319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:54.746 [2024-07-25 13:12:04.993378] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:54.746 [2024-07-25 13:12:04.993400] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:54.746 pt1 00:11:54.746 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:11:54.746 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:54.746 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:54.746 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:54.746 13:12:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:54.746 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:54.746 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:54.746 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:54.746 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:54.746 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:54.746 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.746 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.005 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:55.005 "name": "raid_bdev1", 00:11:55.005 "uuid": "976f3eb5-2f8d-4b66-9083-eae4265b45cc", 00:11:55.005 "strip_size_kb": 64, 00:11:55.005 "state": "configuring", 00:11:55.005 "raid_level": "raid0", 00:11:55.005 "superblock": true, 00:11:55.005 "num_base_bdevs": 2, 00:11:55.005 "num_base_bdevs_discovered": 1, 00:11:55.005 "num_base_bdevs_operational": 2, 00:11:55.005 "base_bdevs_list": [ 00:11:55.005 { 00:11:55.005 "name": "pt1", 00:11:55.005 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:55.005 "is_configured": true, 00:11:55.005 "data_offset": 2048, 00:11:55.005 "data_size": 63488 00:11:55.005 }, 00:11:55.005 { 00:11:55.005 "name": null, 00:11:55.005 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:55.005 "is_configured": false, 00:11:55.005 "data_offset": 2048, 00:11:55.005 "data_size": 63488 00:11:55.005 } 00:11:55.005 ] 00:11:55.005 }' 00:11:55.005 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:11:55.005 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.573 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:11:55.573 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:11:55.573 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:11:55.573 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:55.573 [2024-07-25 13:12:06.046546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:55.573 [2024-07-25 13:12:06.046592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.573 [2024-07-25 13:12:06.046608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2371a00 00:11:55.573 [2024-07-25 13:12:06.046620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.573 [2024-07-25 13:12:06.046924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.573 [2024-07-25 13:12:06.046942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:55.573 [2024-07-25 13:12:06.046995] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:55.573 [2024-07-25 13:12:06.047012] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:55.573 [2024-07-25 13:12:06.047098] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x21d6c30 00:11:55.573 [2024-07-25 13:12:06.047107] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:55.573 [2024-07-25 13:12:06.047271] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21d7af0 
00:11:55.573 [2024-07-25 13:12:06.047380] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21d6c30 00:11:55.573 [2024-07-25 13:12:06.047389] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x21d6c30 00:11:55.573 [2024-07-25 13:12:06.047476] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.573 pt2 00:11:55.832 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:11:55.832 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:11:55.832 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:55.832 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:55.832 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:55.832 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:55.832 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:55.832 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:55.832 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:55.832 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:55.832 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:55.832 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:55.832 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.832 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:11:55.832 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:55.832 "name": "raid_bdev1", 00:11:55.832 "uuid": "976f3eb5-2f8d-4b66-9083-eae4265b45cc", 00:11:55.832 "strip_size_kb": 64, 00:11:55.832 "state": "online", 00:11:55.832 "raid_level": "raid0", 00:11:55.832 "superblock": true, 00:11:55.832 "num_base_bdevs": 2, 00:11:55.832 "num_base_bdevs_discovered": 2, 00:11:55.832 "num_base_bdevs_operational": 2, 00:11:55.832 "base_bdevs_list": [ 00:11:55.832 { 00:11:55.832 "name": "pt1", 00:11:55.832 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:55.832 "is_configured": true, 00:11:55.833 "data_offset": 2048, 00:11:55.833 "data_size": 63488 00:11:55.833 }, 00:11:55.833 { 00:11:55.833 "name": "pt2", 00:11:55.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:55.833 "is_configured": true, 00:11:55.833 "data_offset": 2048, 00:11:55.833 "data_size": 63488 00:11:55.833 } 00:11:55.833 ] 00:11:55.833 }' 00:11:55.833 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:55.833 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.400 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:11:56.400 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:56.400 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:56.400 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:56.400 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:56.400 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:56.400 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:56.400 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:56.659 [2024-07-25 13:12:06.977224] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.659 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:56.659 "name": "raid_bdev1", 00:11:56.659 "aliases": [ 00:11:56.659 "976f3eb5-2f8d-4b66-9083-eae4265b45cc" 00:11:56.659 ], 00:11:56.659 "product_name": "Raid Volume", 00:11:56.659 "block_size": 512, 00:11:56.659 "num_blocks": 126976, 00:11:56.659 "uuid": "976f3eb5-2f8d-4b66-9083-eae4265b45cc", 00:11:56.659 "assigned_rate_limits": { 00:11:56.659 "rw_ios_per_sec": 0, 00:11:56.659 "rw_mbytes_per_sec": 0, 00:11:56.659 "r_mbytes_per_sec": 0, 00:11:56.659 "w_mbytes_per_sec": 0 00:11:56.659 }, 00:11:56.659 "claimed": false, 00:11:56.659 "zoned": false, 00:11:56.659 "supported_io_types": { 00:11:56.659 "read": true, 00:11:56.659 "write": true, 00:11:56.659 "unmap": true, 00:11:56.659 "flush": true, 00:11:56.659 "reset": true, 00:11:56.659 "nvme_admin": false, 00:11:56.659 "nvme_io": false, 00:11:56.659 "nvme_io_md": false, 00:11:56.659 "write_zeroes": true, 00:11:56.659 "zcopy": false, 00:11:56.659 "get_zone_info": false, 00:11:56.659 "zone_management": false, 00:11:56.659 "zone_append": false, 00:11:56.659 "compare": false, 00:11:56.659 "compare_and_write": false, 00:11:56.659 "abort": false, 00:11:56.659 "seek_hole": false, 00:11:56.659 "seek_data": false, 00:11:56.659 "copy": false, 00:11:56.659 "nvme_iov_md": false 00:11:56.659 }, 00:11:56.659 "memory_domains": [ 00:11:56.659 { 00:11:56.659 "dma_device_id": "system", 00:11:56.659 "dma_device_type": 1 00:11:56.659 }, 00:11:56.659 { 00:11:56.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.659 "dma_device_type": 2 00:11:56.659 }, 00:11:56.659 { 00:11:56.659 "dma_device_id": "system", 00:11:56.659 "dma_device_type": 1 00:11:56.659 }, 00:11:56.659 { 
00:11:56.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.659 "dma_device_type": 2 00:11:56.659 } 00:11:56.659 ], 00:11:56.659 "driver_specific": { 00:11:56.659 "raid": { 00:11:56.659 "uuid": "976f3eb5-2f8d-4b66-9083-eae4265b45cc", 00:11:56.659 "strip_size_kb": 64, 00:11:56.659 "state": "online", 00:11:56.659 "raid_level": "raid0", 00:11:56.659 "superblock": true, 00:11:56.659 "num_base_bdevs": 2, 00:11:56.659 "num_base_bdevs_discovered": 2, 00:11:56.659 "num_base_bdevs_operational": 2, 00:11:56.659 "base_bdevs_list": [ 00:11:56.659 { 00:11:56.659 "name": "pt1", 00:11:56.659 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.659 "is_configured": true, 00:11:56.659 "data_offset": 2048, 00:11:56.659 "data_size": 63488 00:11:56.659 }, 00:11:56.659 { 00:11:56.659 "name": "pt2", 00:11:56.659 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.659 "is_configured": true, 00:11:56.659 "data_offset": 2048, 00:11:56.659 "data_size": 63488 00:11:56.659 } 00:11:56.659 ] 00:11:56.659 } 00:11:56.659 } 00:11:56.659 }' 00:11:56.659 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:56.659 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:56.659 pt2' 00:11:56.659 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:56.659 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:56.659 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:56.919 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:56.919 "name": "pt1", 00:11:56.919 "aliases": [ 00:11:56.919 "00000000-0000-0000-0000-000000000001" 00:11:56.919 ], 00:11:56.919 "product_name": "passthru", 
00:11:56.919 "block_size": 512, 00:11:56.919 "num_blocks": 65536, 00:11:56.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.919 "assigned_rate_limits": { 00:11:56.919 "rw_ios_per_sec": 0, 00:11:56.919 "rw_mbytes_per_sec": 0, 00:11:56.919 "r_mbytes_per_sec": 0, 00:11:56.919 "w_mbytes_per_sec": 0 00:11:56.919 }, 00:11:56.919 "claimed": true, 00:11:56.919 "claim_type": "exclusive_write", 00:11:56.919 "zoned": false, 00:11:56.919 "supported_io_types": { 00:11:56.919 "read": true, 00:11:56.919 "write": true, 00:11:56.919 "unmap": true, 00:11:56.919 "flush": true, 00:11:56.919 "reset": true, 00:11:56.919 "nvme_admin": false, 00:11:56.919 "nvme_io": false, 00:11:56.919 "nvme_io_md": false, 00:11:56.919 "write_zeroes": true, 00:11:56.919 "zcopy": true, 00:11:56.919 "get_zone_info": false, 00:11:56.919 "zone_management": false, 00:11:56.919 "zone_append": false, 00:11:56.919 "compare": false, 00:11:56.919 "compare_and_write": false, 00:11:56.919 "abort": true, 00:11:56.919 "seek_hole": false, 00:11:56.919 "seek_data": false, 00:11:56.919 "copy": true, 00:11:56.919 "nvme_iov_md": false 00:11:56.919 }, 00:11:56.919 "memory_domains": [ 00:11:56.919 { 00:11:56.919 "dma_device_id": "system", 00:11:56.919 "dma_device_type": 1 00:11:56.919 }, 00:11:56.919 { 00:11:56.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.919 "dma_device_type": 2 00:11:56.919 } 00:11:56.919 ], 00:11:56.919 "driver_specific": { 00:11:56.919 "passthru": { 00:11:56.919 "name": "pt1", 00:11:56.919 "base_bdev_name": "malloc1" 00:11:56.919 } 00:11:56.919 } 00:11:56.919 }' 00:11:56.919 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.919 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.919 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:56.919 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.919 13:12:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.178 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:57.178 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.178 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.178 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:57.178 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.178 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.178 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:57.178 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:57.178 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:57.178 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:57.439 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:57.439 "name": "pt2", 00:11:57.439 "aliases": [ 00:11:57.439 "00000000-0000-0000-0000-000000000002" 00:11:57.439 ], 00:11:57.439 "product_name": "passthru", 00:11:57.439 "block_size": 512, 00:11:57.439 "num_blocks": 65536, 00:11:57.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.439 "assigned_rate_limits": { 00:11:57.439 "rw_ios_per_sec": 0, 00:11:57.439 "rw_mbytes_per_sec": 0, 00:11:57.439 "r_mbytes_per_sec": 0, 00:11:57.439 "w_mbytes_per_sec": 0 00:11:57.439 }, 00:11:57.439 "claimed": true, 00:11:57.439 "claim_type": "exclusive_write", 00:11:57.439 "zoned": false, 00:11:57.439 "supported_io_types": { 00:11:57.439 "read": true, 00:11:57.439 "write": true, 00:11:57.439 "unmap": true, 00:11:57.439 
"flush": true, 00:11:57.439 "reset": true, 00:11:57.439 "nvme_admin": false, 00:11:57.439 "nvme_io": false, 00:11:57.439 "nvme_io_md": false, 00:11:57.439 "write_zeroes": true, 00:11:57.439 "zcopy": true, 00:11:57.439 "get_zone_info": false, 00:11:57.439 "zone_management": false, 00:11:57.439 "zone_append": false, 00:11:57.439 "compare": false, 00:11:57.439 "compare_and_write": false, 00:11:57.439 "abort": true, 00:11:57.439 "seek_hole": false, 00:11:57.439 "seek_data": false, 00:11:57.439 "copy": true, 00:11:57.439 "nvme_iov_md": false 00:11:57.439 }, 00:11:57.439 "memory_domains": [ 00:11:57.439 { 00:11:57.439 "dma_device_id": "system", 00:11:57.439 "dma_device_type": 1 00:11:57.439 }, 00:11:57.439 { 00:11:57.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.439 "dma_device_type": 2 00:11:57.439 } 00:11:57.439 ], 00:11:57.439 "driver_specific": { 00:11:57.439 "passthru": { 00:11:57.439 "name": "pt2", 00:11:57.439 "base_bdev_name": "malloc2" 00:11:57.439 } 00:11:57.439 } 00:11:57.439 }' 00:11:57.439 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:57.439 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:57.439 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:57.439 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.698 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.698 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:57.698 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.698 13:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.698 13:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:57.698 13:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:11:57.698 13:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.698 13:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:57.698 13:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:57.698 13:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:11:57.958 [2024-07-25 13:12:08.364878] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.958 13:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 976f3eb5-2f8d-4b66-9083-eae4265b45cc '!=' 976f3eb5-2f8d-4b66-9083-eae4265b45cc ']' 00:11:57.958 13:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:11:57.958 13:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:57.958 13:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:57.958 13:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 829389 00:11:57.958 13:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 829389 ']' 00:11:57.958 13:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 829389 00:11:57.958 13:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:57.958 13:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:57.958 13:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 829389 00:11:57.958 13:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:57.958 13:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:57.958 13:12:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 829389' 00:11:57.958 killing process with pid 829389 00:11:57.958 13:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 829389 00:11:57.958 [2024-07-25 13:12:08.443784] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:57.958 [2024-07-25 13:12:08.443833] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.958 [2024-07-25 13:12:08.443871] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.958 [2024-07-25 13:12:08.443881] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21d6c30 name raid_bdev1, state offline 00:11:57.958 13:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 829389 00:11:58.217 [2024-07-25 13:12:08.459563] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:58.217 13:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:11:58.217 00:11:58.217 real 0m9.972s 00:11:58.217 user 0m17.716s 00:11:58.217 sys 0m1.950s 00:11:58.217 13:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.217 13:12:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.217 ************************************ 00:11:58.217 END TEST raid_superblock_test 00:11:58.217 ************************************ 00:11:58.217 13:12:08 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:11:58.217 13:12:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:58.217 13:12:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.217 13:12:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:58.475 ************************************ 00:11:58.475 START TEST raid_read_error_test 00:11:58.475 
************************************ 00:11:58.475 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:11:58.475 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:11:58.475 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:11:58.475 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:11:58.475 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 
-- # local fail_per_s 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.oNO02uU0Go 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=831851 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 831851 /var/tmp/spdk-raid.sock 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 831851 ']' 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:58.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:58.476 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.476 [2024-07-25 13:12:08.801729] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:11:58.476 [2024-07-25 13:12:08.801789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831851 ] 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3f:01.7 
cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:58.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.476 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:58.476 [2024-07-25 13:12:08.937090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.735 [2024-07-25 13:12:09.017720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.735 [2024-07-25 13:12:09.077465] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.735 [2024-07-25 13:12:09.077519] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.301 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:59.301 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:59.301 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:59.301 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:59.560 BaseBdev1_malloc 00:11:59.560 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:59.560 true 00:11:59.560 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:59.819 [2024-07-25 13:12:10.122756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:59.819 [2024-07-25 13:12:10.122805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.819 [2024-07-25 13:12:10.122825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23cf1d0 00:11:59.819 [2024-07-25 13:12:10.122836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.819 [2024-07-25 13:12:10.124424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.819 [2024-07-25 13:12:10.124453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:59.819 BaseBdev1 00:11:59.819 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:59.819 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:59.819 BaseBdev2_malloc 00:12:00.078 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:00.078 true 00:12:00.078 13:12:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:00.382 [2024-07-25 13:12:10.620181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:00.382 [2024-07-25 13:12:10.620217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.382 [2024-07-25 13:12:10.620234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23d2710 00:12:00.382 [2024-07-25 13:12:10.620246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.382 [2024-07-25 13:12:10.621493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.382 [2024-07-25 13:12:10.621519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:00.382 BaseBdev2 00:12:00.382 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:12:00.382 [2024-07-25 13:12:10.788643] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.382 [2024-07-25 13:12:10.789715] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.382 [2024-07-25 13:12:10.789873] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x23d5bc0 00:12:00.382 [2024-07-25 13:12:10.789885] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:00.382 [2024-07-25 13:12:10.790042] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23d8870 00:12:00.382 [2024-07-25 13:12:10.790175] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23d5bc0 00:12:00.382 [2024-07-25 13:12:10.790185] 
bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23d5bc0 00:12:00.382 [2024-07-25 13:12:10.790284] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.382 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:00.382 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:00.382 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:00.382 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:00.382 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:00.382 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:00.382 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:00.382 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:00.382 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:00.382 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:00.382 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.382 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.643 13:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:00.643 "name": "raid_bdev1", 00:12:00.643 "uuid": "679bb4a5-3ae7-4505-96f0-05274d076ddd", 00:12:00.643 "strip_size_kb": 64, 00:12:00.643 "state": "online", 00:12:00.643 "raid_level": "raid0", 00:12:00.643 "superblock": true, 00:12:00.643 
"num_base_bdevs": 2, 00:12:00.643 "num_base_bdevs_discovered": 2, 00:12:00.643 "num_base_bdevs_operational": 2, 00:12:00.643 "base_bdevs_list": [ 00:12:00.643 { 00:12:00.643 "name": "BaseBdev1", 00:12:00.643 "uuid": "7a744bad-23e5-541a-9d70-dbf3157d0581", 00:12:00.643 "is_configured": true, 00:12:00.643 "data_offset": 2048, 00:12:00.643 "data_size": 63488 00:12:00.643 }, 00:12:00.643 { 00:12:00.643 "name": "BaseBdev2", 00:12:00.643 "uuid": "e5157680-c8c9-5e62-97e9-a7726a98e6ae", 00:12:00.643 "is_configured": true, 00:12:00.643 "data_offset": 2048, 00:12:00.643 "data_size": 63488 00:12:00.643 } 00:12:00.643 ] 00:12:00.643 }' 00:12:00.643 13:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:00.643 13:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.212 13:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:12:01.212 13:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:01.472 [2024-07-25 13:12:11.719367] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23d1260 00:12:02.412 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:02.412 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:12:02.412 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:02.412 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:12:02.412 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:02.412 13:12:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:02.412 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:02.412 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:02.412 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:02.412 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:02.412 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:02.412 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:02.412 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:02.412 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:02.412 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.412 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.671 13:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:02.671 "name": "raid_bdev1", 00:12:02.671 "uuid": "679bb4a5-3ae7-4505-96f0-05274d076ddd", 00:12:02.671 "strip_size_kb": 64, 00:12:02.671 "state": "online", 00:12:02.671 "raid_level": "raid0", 00:12:02.671 "superblock": true, 00:12:02.671 "num_base_bdevs": 2, 00:12:02.671 "num_base_bdevs_discovered": 2, 00:12:02.671 "num_base_bdevs_operational": 2, 00:12:02.671 "base_bdevs_list": [ 00:12:02.671 { 00:12:02.671 "name": "BaseBdev1", 00:12:02.671 "uuid": "7a744bad-23e5-541a-9d70-dbf3157d0581", 00:12:02.671 "is_configured": true, 00:12:02.671 "data_offset": 2048, 00:12:02.671 "data_size": 63488 00:12:02.671 }, 00:12:02.671 { 00:12:02.671 "name": 
"BaseBdev2", 00:12:02.671 "uuid": "e5157680-c8c9-5e62-97e9-a7726a98e6ae", 00:12:02.671 "is_configured": true, 00:12:02.671 "data_offset": 2048, 00:12:02.671 "data_size": 63488 00:12:02.671 } 00:12:02.671 ] 00:12:02.671 }' 00:12:02.671 13:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:02.671 13:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.240 13:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:03.500 [2024-07-25 13:12:13.809043] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.500 [2024-07-25 13:12:13.809083] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.500 [2024-07-25 13:12:13.812010] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.500 [2024-07-25 13:12:13.812038] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.500 [2024-07-25 13:12:13.812062] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.500 [2024-07-25 13:12:13.812072] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23d5bc0 name raid_bdev1, state offline 00:12:03.500 0 00:12:03.500 13:12:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 831851 00:12:03.500 13:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 831851 ']' 00:12:03.500 13:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 831851 00:12:03.500 13:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:12:03.500 13:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:03.500 13:12:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 831851 00:12:03.500 13:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:03.500 13:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:03.500 13:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 831851' 00:12:03.500 killing process with pid 831851 00:12:03.500 13:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 831851 00:12:03.500 [2024-07-25 13:12:13.885204] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:03.500 13:12:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 831851 00:12:03.500 [2024-07-25 13:12:13.894624] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:03.760 13:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.oNO02uU0Go 00:12:03.760 13:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:12:03.760 13:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:12:03.760 13:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.48 00:12:03.760 13:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:12:03.760 13:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:03.760 13:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:03.760 13:12:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.48 != \0\.\0\0 ]] 00:12:03.760 00:12:03.760 real 0m5.375s 00:12:03.760 user 0m8.187s 00:12:03.760 sys 0m0.970s 00:12:03.760 13:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.760 13:12:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:03.760 ************************************ 00:12:03.760 END TEST raid_read_error_test 00:12:03.760 ************************************ 00:12:03.760 13:12:14 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:12:03.760 13:12:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:03.760 13:12:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.760 13:12:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:03.760 ************************************ 00:12:03.760 START TEST raid_write_error_test 00:12:03.760 ************************************ 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:12:03.760 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:12:03.761 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:12:03.761 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.Dv2hDE8S8A 00:12:03.761 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:03.761 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=832892 00:12:03.761 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 832892 /var/tmp/spdk-raid.sock 00:12:03.761 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 832892 ']' 00:12:03.761 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:03.761 13:12:14 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:12:03.761 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:03.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:03.761 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:03.761 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.761 [2024-07-25 13:12:14.240843] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:12:03.761 [2024-07-25 13:12:14.240888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid832892 ] 00:12:04.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:04.021 EAL: Requested device 0000:3d:01.0 cannot be used
[the qat_pci_device_allocate()/EAL "cannot be used" message pair above repeated for devices 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7]
00:12:04.022 [2024-07-25 13:12:14.358769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.022 [2024-07-25 13:12:14.448471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.022 [2024-07-25 13:12:14.501782] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.022 [2024-07-25 13:12:14.501809] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.961 13:12:15
bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:04.961 13:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:04.961 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:04.961 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:04.961 BaseBdev1_malloc 00:12:04.961 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:05.221 true 00:12:05.221 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:05.480 [2024-07-25 13:12:15.846277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:05.480 [2024-07-25 13:12:15.846318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.480 [2024-07-25 13:12:15.846336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e511d0 00:12:05.480 [2024-07-25 13:12:15.846347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.480 [2024-07-25 13:12:15.847936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.480 [2024-07-25 13:12:15.847964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:05.480 BaseBdev1 00:12:05.480 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:05.480 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:05.739 BaseBdev2_malloc 00:12:05.739 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:05.998 true 00:12:05.998 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:06.257 [2024-07-25 13:12:16.516398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:06.257 [2024-07-25 13:12:16.516437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.257 [2024-07-25 13:12:16.516455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e54710 00:12:06.257 [2024-07-25 13:12:16.516467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.257 [2024-07-25 13:12:16.517845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.257 [2024-07-25 13:12:16.517873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:06.257 BaseBdev2 00:12:06.257 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:12:06.257 [2024-07-25 13:12:16.741014] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.257 [2024-07-25 13:12:16.742232] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.257 [2024-07-25 13:12:16.742401] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0x1e57bc0 00:12:06.257 [2024-07-25 13:12:16.742414] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:06.257 [2024-07-25 13:12:16.742592] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e5a870 00:12:06.257 [2024-07-25 13:12:16.742724] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e57bc0 00:12:06.257 [2024-07-25 13:12:16.742734] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e57bc0 00:12:06.257 [2024-07-25 13:12:16.742844] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.516 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:06.516 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:06.516 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:06.516 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:06.516 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:06.516 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:06.516 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:06.516 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:06.516 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:06.516 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:06.516 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.516 13:12:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.516 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:06.516 "name": "raid_bdev1", 00:12:06.516 "uuid": "5cf88874-607d-405c-9cd7-d25bfaf0282c", 00:12:06.516 "strip_size_kb": 64, 00:12:06.516 "state": "online", 00:12:06.516 "raid_level": "raid0", 00:12:06.516 "superblock": true, 00:12:06.516 "num_base_bdevs": 2, 00:12:06.516 "num_base_bdevs_discovered": 2, 00:12:06.516 "num_base_bdevs_operational": 2, 00:12:06.516 "base_bdevs_list": [ 00:12:06.516 { 00:12:06.516 "name": "BaseBdev1", 00:12:06.516 "uuid": "b4117e7e-c813-58b8-8fe5-9a06145fb0f6", 00:12:06.516 "is_configured": true, 00:12:06.516 "data_offset": 2048, 00:12:06.516 "data_size": 63488 00:12:06.516 }, 00:12:06.516 { 00:12:06.516 "name": "BaseBdev2", 00:12:06.516 "uuid": "3ab8757f-c44b-5e2f-9d86-2f2a6217646d", 00:12:06.516 "is_configured": true, 00:12:06.516 "data_offset": 2048, 00:12:06.516 "data_size": 63488 00:12:06.516 } 00:12:06.516 ] 00:12:06.516 }' 00:12:06.516 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:06.516 13:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.084 13:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:12:07.084 13:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:07.343 [2024-07-25 13:12:17.635744] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e53260 00:12:08.281 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:08.540 13:12:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:12:08.540 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:08.540 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:12:08.540 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:08.540 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:08.540 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:08.540 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:08.540 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:08.540 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:08.540 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:08.540 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:08.540 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:08.540 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:08.541 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.541 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.541 13:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:08.541 "name": "raid_bdev1", 00:12:08.541 "uuid": "5cf88874-607d-405c-9cd7-d25bfaf0282c", 00:12:08.541 "strip_size_kb": 64, 00:12:08.541 "state": "online", 00:12:08.541 
"raid_level": "raid0", 00:12:08.541 "superblock": true, 00:12:08.541 "num_base_bdevs": 2, 00:12:08.541 "num_base_bdevs_discovered": 2, 00:12:08.541 "num_base_bdevs_operational": 2, 00:12:08.541 "base_bdevs_list": [ 00:12:08.541 { 00:12:08.541 "name": "BaseBdev1", 00:12:08.541 "uuid": "b4117e7e-c813-58b8-8fe5-9a06145fb0f6", 00:12:08.541 "is_configured": true, 00:12:08.541 "data_offset": 2048, 00:12:08.541 "data_size": 63488 00:12:08.541 }, 00:12:08.541 { 00:12:08.541 "name": "BaseBdev2", 00:12:08.541 "uuid": "3ab8757f-c44b-5e2f-9d86-2f2a6217646d", 00:12:08.541 "is_configured": true, 00:12:08.541 "data_offset": 2048, 00:12:08.541 "data_size": 63488 00:12:08.541 } 00:12:08.541 ] 00:12:08.541 }' 00:12:08.541 13:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:08.541 13:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.109 13:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:09.369 [2024-07-25 13:12:19.834575] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:09.369 [2024-07-25 13:12:19.834617] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:09.369 [2024-07-25 13:12:19.837530] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.369 [2024-07-25 13:12:19.837558] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.369 [2024-07-25 13:12:19.837582] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.369 [2024-07-25 13:12:19.837591] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e57bc0 name raid_bdev1, state offline 00:12:09.369 0 00:12:09.369 13:12:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 832892 
00:12:09.369 13:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 832892 ']' 00:12:09.369 13:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 832892 00:12:09.627 13:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:09.627 13:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:09.627 13:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 832892 00:12:09.627 13:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:09.627 13:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:09.627 13:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 832892' 00:12:09.627 killing process with pid 832892 00:12:09.627 13:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 832892 00:12:09.627 [2024-07-25 13:12:19.910784] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:09.627 13:12:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 832892 00:12:09.627 [2024-07-25 13:12:19.920079] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:09.627 13:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.Dv2hDE8S8A 00:12:09.885 13:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:12:09.885 13:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:12:09.885 13:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:12:09.885 13:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:12:09.885 13:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 
in 00:12:09.885 13:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:09.885 13:12:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:12:09.885 00:12:09.885 real 0m5.935s 00:12:09.885 user 0m9.295s 00:12:09.885 sys 0m0.988s 00:12:09.885 13:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.885 13:12:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.885 ************************************ 00:12:09.885 END TEST raid_write_error_test 00:12:09.885 ************************************ 00:12:09.885 13:12:20 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:12:09.885 13:12:20 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:12:09.885 13:12:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:09.885 13:12:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.885 13:12:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.885 ************************************ 00:12:09.885 START TEST raid_state_function_test 00:12:09.885 ************************************ 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # 
raid_pid=833941 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 833941' 00:12:09.885 Process raid pid: 833941 00:12:09.885 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:09.886 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 833941 /var/tmp/spdk-raid.sock 00:12:09.886 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 833941 ']' 00:12:09.886 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:09.886 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:09.886 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:09.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:09.886 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:09.886 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.886 [2024-07-25 13:12:20.276622] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:12:09.886 [2024-07-25 13:12:20.276680] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.886 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.886 EAL: Requested device 0000:3d:01.0 cannot be used
[the qat_pci_device_allocate()/EAL "cannot be used" message pair above repeated for devices 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7]
00:12:10.144 [2024-07-25 13:12:20.408158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.144 [2024-07-25 13:12:20.494433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.144 [2024-07-25 13:12:20.562415] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.144 [2024-07-25 13:12:20.562451] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.713 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:10.713 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:10.713 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:10.972 [2024-07-25 13:12:21.385249] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:10.972 [2024-07-25 13:12:21.385290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:10.972 [2024-07-25 13:12:21.385300] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:10.972 [2024-07-25 13:12:21.385311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:10.972 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:10.972 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:10.972 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:10.972 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:10.972 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:10.972 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:10.972 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:10.972 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:10.972 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:10.972 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:10.972 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.972 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.232 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:11.232 "name": "Existed_Raid", 00:12:11.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.232 "strip_size_kb": 64, 
00:12:11.232 "state": "configuring", 00:12:11.232 "raid_level": "concat", 00:12:11.232 "superblock": false, 00:12:11.232 "num_base_bdevs": 2, 00:12:11.232 "num_base_bdevs_discovered": 0, 00:12:11.232 "num_base_bdevs_operational": 2, 00:12:11.232 "base_bdevs_list": [ 00:12:11.232 { 00:12:11.232 "name": "BaseBdev1", 00:12:11.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.232 "is_configured": false, 00:12:11.232 "data_offset": 0, 00:12:11.232 "data_size": 0 00:12:11.232 }, 00:12:11.232 { 00:12:11.232 "name": "BaseBdev2", 00:12:11.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.232 "is_configured": false, 00:12:11.232 "data_offset": 0, 00:12:11.232 "data_size": 0 00:12:11.232 } 00:12:11.232 ] 00:12:11.232 }' 00:12:11.232 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:11.232 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.800 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:12.059 [2024-07-25 13:12:22.407819] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.059 [2024-07-25 13:12:22.407850] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe9af20 name Existed_Raid, state configuring 00:12:12.059 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:12.318 [2024-07-25 13:12:22.636429] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.318 [2024-07-25 13:12:22.636461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.318 [2024-07-25 13:12:22.636470] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.318 [2024-07-25 13:12:22.636481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.318 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:12.578 [2024-07-25 13:12:22.870574] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.578 BaseBdev1 00:12:12.578 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:12.578 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:12.578 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:12.578 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:12.578 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:12.578 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:12.578 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:12.837 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:12.837 [ 00:12:12.837 { 00:12:12.837 "name": "BaseBdev1", 00:12:12.837 "aliases": [ 00:12:12.837 "12278519-770f-408f-b175-07babcec3095" 00:12:12.837 ], 00:12:12.837 "product_name": "Malloc disk", 00:12:12.837 "block_size": 512, 00:12:12.837 "num_blocks": 65536, 00:12:12.837 "uuid": 
"12278519-770f-408f-b175-07babcec3095", 00:12:12.837 "assigned_rate_limits": { 00:12:12.837 "rw_ios_per_sec": 0, 00:12:12.837 "rw_mbytes_per_sec": 0, 00:12:12.837 "r_mbytes_per_sec": 0, 00:12:12.837 "w_mbytes_per_sec": 0 00:12:12.837 }, 00:12:12.837 "claimed": true, 00:12:12.837 "claim_type": "exclusive_write", 00:12:12.837 "zoned": false, 00:12:12.837 "supported_io_types": { 00:12:12.837 "read": true, 00:12:12.837 "write": true, 00:12:12.837 "unmap": true, 00:12:12.837 "flush": true, 00:12:12.837 "reset": true, 00:12:12.837 "nvme_admin": false, 00:12:12.837 "nvme_io": false, 00:12:12.837 "nvme_io_md": false, 00:12:12.837 "write_zeroes": true, 00:12:12.837 "zcopy": true, 00:12:12.837 "get_zone_info": false, 00:12:12.837 "zone_management": false, 00:12:12.837 "zone_append": false, 00:12:12.837 "compare": false, 00:12:12.837 "compare_and_write": false, 00:12:12.837 "abort": true, 00:12:12.837 "seek_hole": false, 00:12:12.837 "seek_data": false, 00:12:12.837 "copy": true, 00:12:12.837 "nvme_iov_md": false 00:12:12.837 }, 00:12:12.837 "memory_domains": [ 00:12:12.837 { 00:12:12.837 "dma_device_id": "system", 00:12:12.837 "dma_device_type": 1 00:12:12.837 }, 00:12:12.837 { 00:12:12.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.837 "dma_device_type": 2 00:12:12.837 } 00:12:12.837 ], 00:12:12.837 "driver_specific": {} 00:12:12.837 } 00:12:12.837 ] 00:12:13.096 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:13.096 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:13.096 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:13.096 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:13.096 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:13.096 13:12:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:13.096 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:13.096 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:13.096 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:13.096 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:13.096 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:13.096 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.096 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.096 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:13.096 "name": "Existed_Raid", 00:12:13.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.096 "strip_size_kb": 64, 00:12:13.096 "state": "configuring", 00:12:13.096 "raid_level": "concat", 00:12:13.096 "superblock": false, 00:12:13.096 "num_base_bdevs": 2, 00:12:13.096 "num_base_bdevs_discovered": 1, 00:12:13.096 "num_base_bdevs_operational": 2, 00:12:13.096 "base_bdevs_list": [ 00:12:13.096 { 00:12:13.096 "name": "BaseBdev1", 00:12:13.096 "uuid": "12278519-770f-408f-b175-07babcec3095", 00:12:13.096 "is_configured": true, 00:12:13.096 "data_offset": 0, 00:12:13.096 "data_size": 65536 00:12:13.096 }, 00:12:13.096 { 00:12:13.096 "name": "BaseBdev2", 00:12:13.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.096 "is_configured": false, 00:12:13.096 "data_offset": 0, 00:12:13.096 "data_size": 0 00:12:13.096 } 00:12:13.096 ] 00:12:13.096 }' 00:12:13.096 13:12:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:13.096 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.730 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:13.989 [2024-07-25 13:12:24.326535] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.989 [2024-07-25 13:12:24.326570] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe9a810 name Existed_Raid, state configuring 00:12:13.989 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:14.248 [2024-07-25 13:12:24.555239] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.248 [2024-07-25 13:12:24.556607] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:14.248 [2024-07-25 13:12:24.556637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:14.248 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:14.248 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:14.248 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:14.248 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:14.248 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:14.248 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:12:14.248 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:14.248 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:14.248 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:14.248 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:14.248 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:14.248 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:14.249 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:14.249 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.508 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:14.508 "name": "Existed_Raid", 00:12:14.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.508 "strip_size_kb": 64, 00:12:14.508 "state": "configuring", 00:12:14.508 "raid_level": "concat", 00:12:14.508 "superblock": false, 00:12:14.508 "num_base_bdevs": 2, 00:12:14.508 "num_base_bdevs_discovered": 1, 00:12:14.508 "num_base_bdevs_operational": 2, 00:12:14.508 "base_bdevs_list": [ 00:12:14.508 { 00:12:14.508 "name": "BaseBdev1", 00:12:14.508 "uuid": "12278519-770f-408f-b175-07babcec3095", 00:12:14.508 "is_configured": true, 00:12:14.508 "data_offset": 0, 00:12:14.508 "data_size": 65536 00:12:14.508 }, 00:12:14.508 { 00:12:14.508 "name": "BaseBdev2", 00:12:14.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.508 "is_configured": false, 00:12:14.508 "data_offset": 0, 00:12:14.508 "data_size": 0 00:12:14.508 } 00:12:14.508 ] 00:12:14.509 }' 
00:12:14.509 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:14.509 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.078 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:15.336 [2024-07-25 13:12:25.613206] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.336 [2024-07-25 13:12:25.613237] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xe9b610 00:12:15.336 [2024-07-25 13:12:25.613244] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:15.336 [2024-07-25 13:12:25.613419] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe87690 00:12:15.336 [2024-07-25 13:12:25.613529] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe9b610 00:12:15.336 [2024-07-25 13:12:25.613538] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xe9b610 00:12:15.336 [2024-07-25 13:12:25.613681] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.336 BaseBdev2 00:12:15.336 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:15.336 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:15.336 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:15.336 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:15.336 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:15.336 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:12:15.336 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:15.596 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:15.596 [ 00:12:15.596 { 00:12:15.596 "name": "BaseBdev2", 00:12:15.596 "aliases": [ 00:12:15.596 "4cab845a-f2aa-4557-a17f-1f59ae130a1b" 00:12:15.596 ], 00:12:15.596 "product_name": "Malloc disk", 00:12:15.596 "block_size": 512, 00:12:15.596 "num_blocks": 65536, 00:12:15.596 "uuid": "4cab845a-f2aa-4557-a17f-1f59ae130a1b", 00:12:15.596 "assigned_rate_limits": { 00:12:15.596 "rw_ios_per_sec": 0, 00:12:15.596 "rw_mbytes_per_sec": 0, 00:12:15.596 "r_mbytes_per_sec": 0, 00:12:15.596 "w_mbytes_per_sec": 0 00:12:15.596 }, 00:12:15.596 "claimed": true, 00:12:15.596 "claim_type": "exclusive_write", 00:12:15.596 "zoned": false, 00:12:15.596 "supported_io_types": { 00:12:15.596 "read": true, 00:12:15.596 "write": true, 00:12:15.596 "unmap": true, 00:12:15.596 "flush": true, 00:12:15.596 "reset": true, 00:12:15.596 "nvme_admin": false, 00:12:15.596 "nvme_io": false, 00:12:15.596 "nvme_io_md": false, 00:12:15.596 "write_zeroes": true, 00:12:15.596 "zcopy": true, 00:12:15.596 "get_zone_info": false, 00:12:15.596 "zone_management": false, 00:12:15.596 "zone_append": false, 00:12:15.596 "compare": false, 00:12:15.596 "compare_and_write": false, 00:12:15.596 "abort": true, 00:12:15.596 "seek_hole": false, 00:12:15.596 "seek_data": false, 00:12:15.596 "copy": true, 00:12:15.596 "nvme_iov_md": false 00:12:15.596 }, 00:12:15.596 "memory_domains": [ 00:12:15.596 { 00:12:15.596 "dma_device_id": "system", 00:12:15.596 "dma_device_type": 1 00:12:15.596 }, 00:12:15.596 { 00:12:15.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.596 "dma_device_type": 2 
00:12:15.596 } 00:12:15.596 ], 00:12:15.596 "driver_specific": {} 00:12:15.596 } 00:12:15.596 ] 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:12:15.856 "name": "Existed_Raid", 00:12:15.856 "uuid": "9782e4e9-a8f4-4516-af97-63ae96b8e9c9", 00:12:15.856 "strip_size_kb": 64, 00:12:15.856 "state": "online", 00:12:15.856 "raid_level": "concat", 00:12:15.856 "superblock": false, 00:12:15.856 "num_base_bdevs": 2, 00:12:15.856 "num_base_bdevs_discovered": 2, 00:12:15.856 "num_base_bdevs_operational": 2, 00:12:15.856 "base_bdevs_list": [ 00:12:15.856 { 00:12:15.856 "name": "BaseBdev1", 00:12:15.856 "uuid": "12278519-770f-408f-b175-07babcec3095", 00:12:15.856 "is_configured": true, 00:12:15.856 "data_offset": 0, 00:12:15.856 "data_size": 65536 00:12:15.856 }, 00:12:15.856 { 00:12:15.856 "name": "BaseBdev2", 00:12:15.856 "uuid": "4cab845a-f2aa-4557-a17f-1f59ae130a1b", 00:12:15.856 "is_configured": true, 00:12:15.856 "data_offset": 0, 00:12:15.856 "data_size": 65536 00:12:15.856 } 00:12:15.856 ] 00:12:15.856 }' 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:15.856 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.424 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:16.424 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:16.424 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:16.424 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:16.424 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:16.424 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:16.424 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:16.424 13:12:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:16.684 [2024-07-25 13:12:27.089497] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.684 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:16.684 "name": "Existed_Raid", 00:12:16.684 "aliases": [ 00:12:16.684 "9782e4e9-a8f4-4516-af97-63ae96b8e9c9" 00:12:16.684 ], 00:12:16.684 "product_name": "Raid Volume", 00:12:16.684 "block_size": 512, 00:12:16.684 "num_blocks": 131072, 00:12:16.684 "uuid": "9782e4e9-a8f4-4516-af97-63ae96b8e9c9", 00:12:16.684 "assigned_rate_limits": { 00:12:16.684 "rw_ios_per_sec": 0, 00:12:16.684 "rw_mbytes_per_sec": 0, 00:12:16.684 "r_mbytes_per_sec": 0, 00:12:16.684 "w_mbytes_per_sec": 0 00:12:16.684 }, 00:12:16.684 "claimed": false, 00:12:16.684 "zoned": false, 00:12:16.684 "supported_io_types": { 00:12:16.684 "read": true, 00:12:16.684 "write": true, 00:12:16.684 "unmap": true, 00:12:16.684 "flush": true, 00:12:16.684 "reset": true, 00:12:16.684 "nvme_admin": false, 00:12:16.684 "nvme_io": false, 00:12:16.684 "nvme_io_md": false, 00:12:16.684 "write_zeroes": true, 00:12:16.684 "zcopy": false, 00:12:16.684 "get_zone_info": false, 00:12:16.684 "zone_management": false, 00:12:16.684 "zone_append": false, 00:12:16.684 "compare": false, 00:12:16.684 "compare_and_write": false, 00:12:16.684 "abort": false, 00:12:16.684 "seek_hole": false, 00:12:16.684 "seek_data": false, 00:12:16.684 "copy": false, 00:12:16.684 "nvme_iov_md": false 00:12:16.684 }, 00:12:16.684 "memory_domains": [ 00:12:16.684 { 00:12:16.684 "dma_device_id": "system", 00:12:16.684 "dma_device_type": 1 00:12:16.684 }, 00:12:16.684 { 00:12:16.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.684 "dma_device_type": 2 00:12:16.684 }, 00:12:16.684 { 00:12:16.684 "dma_device_id": "system", 00:12:16.684 "dma_device_type": 1 00:12:16.684 }, 00:12:16.684 { 00:12:16.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:16.684 "dma_device_type": 2 00:12:16.684 } 00:12:16.684 ], 00:12:16.684 "driver_specific": { 00:12:16.684 "raid": { 00:12:16.684 "uuid": "9782e4e9-a8f4-4516-af97-63ae96b8e9c9", 00:12:16.684 "strip_size_kb": 64, 00:12:16.684 "state": "online", 00:12:16.684 "raid_level": "concat", 00:12:16.684 "superblock": false, 00:12:16.684 "num_base_bdevs": 2, 00:12:16.684 "num_base_bdevs_discovered": 2, 00:12:16.684 "num_base_bdevs_operational": 2, 00:12:16.684 "base_bdevs_list": [ 00:12:16.684 { 00:12:16.684 "name": "BaseBdev1", 00:12:16.684 "uuid": "12278519-770f-408f-b175-07babcec3095", 00:12:16.684 "is_configured": true, 00:12:16.684 "data_offset": 0, 00:12:16.684 "data_size": 65536 00:12:16.684 }, 00:12:16.684 { 00:12:16.684 "name": "BaseBdev2", 00:12:16.684 "uuid": "4cab845a-f2aa-4557-a17f-1f59ae130a1b", 00:12:16.684 "is_configured": true, 00:12:16.684 "data_offset": 0, 00:12:16.684 "data_size": 65536 00:12:16.684 } 00:12:16.684 ] 00:12:16.684 } 00:12:16.684 } 00:12:16.684 }' 00:12:16.684 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:16.684 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:16.684 BaseBdev2' 00:12:16.684 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:16.684 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:16.684 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:16.944 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:16.944 "name": "BaseBdev1", 00:12:16.944 "aliases": [ 00:12:16.944 "12278519-770f-408f-b175-07babcec3095" 00:12:16.944 ], 00:12:16.944 "product_name": "Malloc disk", 
00:12:16.944 "block_size": 512, 00:12:16.944 "num_blocks": 65536, 00:12:16.944 "uuid": "12278519-770f-408f-b175-07babcec3095", 00:12:16.944 "assigned_rate_limits": { 00:12:16.944 "rw_ios_per_sec": 0, 00:12:16.944 "rw_mbytes_per_sec": 0, 00:12:16.944 "r_mbytes_per_sec": 0, 00:12:16.944 "w_mbytes_per_sec": 0 00:12:16.944 }, 00:12:16.944 "claimed": true, 00:12:16.944 "claim_type": "exclusive_write", 00:12:16.944 "zoned": false, 00:12:16.944 "supported_io_types": { 00:12:16.944 "read": true, 00:12:16.944 "write": true, 00:12:16.944 "unmap": true, 00:12:16.944 "flush": true, 00:12:16.944 "reset": true, 00:12:16.944 "nvme_admin": false, 00:12:16.944 "nvme_io": false, 00:12:16.944 "nvme_io_md": false, 00:12:16.944 "write_zeroes": true, 00:12:16.944 "zcopy": true, 00:12:16.944 "get_zone_info": false, 00:12:16.944 "zone_management": false, 00:12:16.944 "zone_append": false, 00:12:16.944 "compare": false, 00:12:16.944 "compare_and_write": false, 00:12:16.944 "abort": true, 00:12:16.944 "seek_hole": false, 00:12:16.944 "seek_data": false, 00:12:16.944 "copy": true, 00:12:16.944 "nvme_iov_md": false 00:12:16.944 }, 00:12:16.944 "memory_domains": [ 00:12:16.944 { 00:12:16.944 "dma_device_id": "system", 00:12:16.944 "dma_device_type": 1 00:12:16.944 }, 00:12:16.944 { 00:12:16.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.944 "dma_device_type": 2 00:12:16.944 } 00:12:16.944 ], 00:12:16.944 "driver_specific": {} 00:12:16.944 }' 00:12:16.944 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:16.944 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.204 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:17.204 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.204 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.204 13:12:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:17.204 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.204 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.204 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:17.204 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.204 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.463 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:17.463 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:17.463 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:17.463 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:17.463 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:17.463 "name": "BaseBdev2", 00:12:17.463 "aliases": [ 00:12:17.463 "4cab845a-f2aa-4557-a17f-1f59ae130a1b" 00:12:17.463 ], 00:12:17.463 "product_name": "Malloc disk", 00:12:17.463 "block_size": 512, 00:12:17.463 "num_blocks": 65536, 00:12:17.463 "uuid": "4cab845a-f2aa-4557-a17f-1f59ae130a1b", 00:12:17.463 "assigned_rate_limits": { 00:12:17.463 "rw_ios_per_sec": 0, 00:12:17.463 "rw_mbytes_per_sec": 0, 00:12:17.463 "r_mbytes_per_sec": 0, 00:12:17.463 "w_mbytes_per_sec": 0 00:12:17.463 }, 00:12:17.463 "claimed": true, 00:12:17.463 "claim_type": "exclusive_write", 00:12:17.463 "zoned": false, 00:12:17.463 "supported_io_types": { 00:12:17.463 "read": true, 00:12:17.463 "write": true, 00:12:17.463 "unmap": true, 00:12:17.463 "flush": true, 00:12:17.463 "reset": 
true, 00:12:17.463 "nvme_admin": false, 00:12:17.463 "nvme_io": false, 00:12:17.463 "nvme_io_md": false, 00:12:17.463 "write_zeroes": true, 00:12:17.463 "zcopy": true, 00:12:17.463 "get_zone_info": false, 00:12:17.463 "zone_management": false, 00:12:17.463 "zone_append": false, 00:12:17.463 "compare": false, 00:12:17.463 "compare_and_write": false, 00:12:17.463 "abort": true, 00:12:17.463 "seek_hole": false, 00:12:17.463 "seek_data": false, 00:12:17.463 "copy": true, 00:12:17.463 "nvme_iov_md": false 00:12:17.463 }, 00:12:17.463 "memory_domains": [ 00:12:17.463 { 00:12:17.463 "dma_device_id": "system", 00:12:17.463 "dma_device_type": 1 00:12:17.463 }, 00:12:17.463 { 00:12:17.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.463 "dma_device_type": 2 00:12:17.463 } 00:12:17.463 ], 00:12:17.463 "driver_specific": {} 00:12:17.463 }' 00:12:17.463 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.723 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.723 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:17.723 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.723 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.723 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:17.723 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.723 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.723 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:17.723 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.982 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.982 13:12:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:17.982 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:17.982 [2024-07-25 13:12:28.464907] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:17.982 [2024-07-25 13:12:28.464929] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.982 [2024-07-25 13:12:28.464964] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.241 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:18.241 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:18.241 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:18.241 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:18.241 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:18.241 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:18.241 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:18.241 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:18.241 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:18.241 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:18.241 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:18.241 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:12:18.241 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:18.241 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:18.241 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:18.241 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.242 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.242 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:18.242 "name": "Existed_Raid", 00:12:18.242 "uuid": "9782e4e9-a8f4-4516-af97-63ae96b8e9c9", 00:12:18.242 "strip_size_kb": 64, 00:12:18.242 "state": "offline", 00:12:18.242 "raid_level": "concat", 00:12:18.242 "superblock": false, 00:12:18.242 "num_base_bdevs": 2, 00:12:18.242 "num_base_bdevs_discovered": 1, 00:12:18.242 "num_base_bdevs_operational": 1, 00:12:18.242 "base_bdevs_list": [ 00:12:18.242 { 00:12:18.242 "name": null, 00:12:18.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.242 "is_configured": false, 00:12:18.242 "data_offset": 0, 00:12:18.242 "data_size": 65536 00:12:18.242 }, 00:12:18.242 { 00:12:18.242 "name": "BaseBdev2", 00:12:18.242 "uuid": "4cab845a-f2aa-4557-a17f-1f59ae130a1b", 00:12:18.242 "is_configured": true, 00:12:18.242 "data_offset": 0, 00:12:18.242 "data_size": 65536 00:12:18.242 } 00:12:18.242 ] 00:12:18.242 }' 00:12:18.242 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:18.242 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.809 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:18.809 13:12:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:18.809 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.809 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:19.069 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:19.069 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:19.069 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:19.328 [2024-07-25 13:12:29.713145] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:19.328 [2024-07-25 13:12:29.713189] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe9b610 name Existed_Raid, state offline 00:12:19.328 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:19.328 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:19.328 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:19.328 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:19.588 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:19.588 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:19.588 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:12:19.588 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 833941 
00:12:19.588 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 833941 ']' 00:12:19.588 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 833941 00:12:19.588 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:19.588 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:19.588 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 833941 00:12:19.588 13:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:19.588 13:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:19.588 13:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 833941' 00:12:19.588 killing process with pid 833941 00:12:19.588 13:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 833941 00:12:19.588 [2024-07-25 13:12:30.021280] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.588 13:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 833941 00:12:19.588 [2024-07-25 13:12:30.022129] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:19.847 00:12:19.847 real 0m10.000s 00:12:19.847 user 0m17.739s 00:12:19.847 sys 0m1.878s 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.847 ************************************ 00:12:19.847 END TEST raid_state_function_test 00:12:19.847 ************************************ 00:12:19.847 13:12:30 bdev_raid -- 
bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:12:19.847 13:12:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:19.847 13:12:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.847 13:12:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.847 ************************************ 00:12:19.847 START TEST raid_state_function_test_sb 00:12:19.847 ************************************ 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:19.847 13:12:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=835973 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 835973' 00:12:19.847 Process raid pid: 835973 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 835973 /var/tmp/spdk-raid.sock 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 835973 ']' 00:12:19.847 13:12:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:19.847 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:19.848 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:19.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:19.848 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:19.848 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.107 [2024-07-25 13:12:30.365691] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:12:20.107 [2024-07-25 13:12:30.365747] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3d:01.5 cannot be used 
00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:20.107 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:20.107 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:20.107 [2024-07-25 13:12:30.497686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.107 [2024-07-25 13:12:30.584820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.367 [2024-07-25 13:12:30.643558] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:12:20.367 [2024-07-25 13:12:30.643590] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.367 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:20.367 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:20.367 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:20.626 [2024-07-25 13:12:30.975958] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.626 [2024-07-25 13:12:30.975997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.626 [2024-07-25 13:12:30.976007] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:20.626 [2024-07-25 13:12:30.976019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:20.626 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:20.626 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:20.626 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:20.626 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:20.626 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:20.626 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:20.626 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:12:20.626 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:20.626 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:20.626 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:20.626 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.626 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:20.885 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:20.885 "name": "Existed_Raid", 00:12:20.885 "uuid": "db57f55d-d0db-4413-8579-4dec1e16760d", 00:12:20.885 "strip_size_kb": 64, 00:12:20.885 "state": "configuring", 00:12:20.885 "raid_level": "concat", 00:12:20.885 "superblock": true, 00:12:20.885 "num_base_bdevs": 2, 00:12:20.885 "num_base_bdevs_discovered": 0, 00:12:20.885 "num_base_bdevs_operational": 2, 00:12:20.885 "base_bdevs_list": [ 00:12:20.885 { 00:12:20.885 "name": "BaseBdev1", 00:12:20.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.885 "is_configured": false, 00:12:20.885 "data_offset": 0, 00:12:20.885 "data_size": 0 00:12:20.885 }, 00:12:20.885 { 00:12:20.885 "name": "BaseBdev2", 00:12:20.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.885 "is_configured": false, 00:12:20.885 "data_offset": 0, 00:12:20.885 "data_size": 0 00:12:20.885 } 00:12:20.885 ] 00:12:20.885 }' 00:12:20.885 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:20.885 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.453 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:21.711 [2024-07-25 13:12:31.954404] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:21.711 [2024-07-25 13:12:31.954438] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1954f20 name Existed_Raid, state configuring 00:12:21.711 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:21.711 [2024-07-25 13:12:32.179011] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:21.711 [2024-07-25 13:12:32.179039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:21.711 [2024-07-25 13:12:32.179048] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:21.711 [2024-07-25 13:12:32.179059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:21.711 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:21.971 [2024-07-25 13:12:32.413163] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.971 BaseBdev1 00:12:21.971 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:21.971 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:21.971 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:21.971 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
00:12:21.971 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:21.971 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:21.971 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:22.474 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:22.474 [ 00:12:22.474 { 00:12:22.474 "name": "BaseBdev1", 00:12:22.474 "aliases": [ 00:12:22.474 "6fd56feb-526b-4a3a-aebf-1e19a4b3d041" 00:12:22.474 ], 00:12:22.474 "product_name": "Malloc disk", 00:12:22.474 "block_size": 512, 00:12:22.474 "num_blocks": 65536, 00:12:22.474 "uuid": "6fd56feb-526b-4a3a-aebf-1e19a4b3d041", 00:12:22.475 "assigned_rate_limits": { 00:12:22.475 "rw_ios_per_sec": 0, 00:12:22.475 "rw_mbytes_per_sec": 0, 00:12:22.475 "r_mbytes_per_sec": 0, 00:12:22.475 "w_mbytes_per_sec": 0 00:12:22.475 }, 00:12:22.475 "claimed": true, 00:12:22.475 "claim_type": "exclusive_write", 00:12:22.475 "zoned": false, 00:12:22.475 "supported_io_types": { 00:12:22.475 "read": true, 00:12:22.475 "write": true, 00:12:22.475 "unmap": true, 00:12:22.475 "flush": true, 00:12:22.475 "reset": true, 00:12:22.475 "nvme_admin": false, 00:12:22.475 "nvme_io": false, 00:12:22.475 "nvme_io_md": false, 00:12:22.475 "write_zeroes": true, 00:12:22.475 "zcopy": true, 00:12:22.475 "get_zone_info": false, 00:12:22.475 "zone_management": false, 00:12:22.475 "zone_append": false, 00:12:22.475 "compare": false, 00:12:22.475 "compare_and_write": false, 00:12:22.475 "abort": true, 00:12:22.475 "seek_hole": false, 00:12:22.475 "seek_data": false, 00:12:22.475 "copy": true, 00:12:22.475 "nvme_iov_md": false 00:12:22.475 }, 00:12:22.475 
"memory_domains": [ 00:12:22.475 { 00:12:22.475 "dma_device_id": "system", 00:12:22.475 "dma_device_type": 1 00:12:22.475 }, 00:12:22.475 { 00:12:22.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.475 "dma_device_type": 2 00:12:22.475 } 00:12:22.475 ], 00:12:22.475 "driver_specific": {} 00:12:22.475 } 00:12:22.475 ] 00:12:22.475 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:22.475 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:22.475 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:22.475 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:22.475 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:22.475 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:22.475 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:22.475 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:22.475 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:22.475 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:22.475 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:22.475 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.475 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:22.734 13:12:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:22.734 "name": "Existed_Raid", 00:12:22.734 "uuid": "9d8af632-5dae-4e64-afbf-cddd3b577bec", 00:12:22.734 "strip_size_kb": 64, 00:12:22.734 "state": "configuring", 00:12:22.734 "raid_level": "concat", 00:12:22.734 "superblock": true, 00:12:22.734 "num_base_bdevs": 2, 00:12:22.734 "num_base_bdevs_discovered": 1, 00:12:22.734 "num_base_bdevs_operational": 2, 00:12:22.734 "base_bdevs_list": [ 00:12:22.734 { 00:12:22.734 "name": "BaseBdev1", 00:12:22.734 "uuid": "6fd56feb-526b-4a3a-aebf-1e19a4b3d041", 00:12:22.734 "is_configured": true, 00:12:22.734 "data_offset": 2048, 00:12:22.734 "data_size": 63488 00:12:22.734 }, 00:12:22.734 { 00:12:22.734 "name": "BaseBdev2", 00:12:22.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.734 "is_configured": false, 00:12:22.734 "data_offset": 0, 00:12:22.734 "data_size": 0 00:12:22.734 } 00:12:22.734 ] 00:12:22.734 }' 00:12:22.734 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:22.734 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.307 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:23.567 [2024-07-25 13:12:33.885024] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:23.567 [2024-07-25 13:12:33.885068] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1954810 name Existed_Raid, state configuring 00:12:23.567 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:23.826 [2024-07-25 13:12:34.097621] bdev_raid.c:3312:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:12:23.826 [2024-07-25 13:12:34.099006] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:23.826 [2024-07-25 13:12:34.099037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:23.826 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:23.826 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:23.827 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:23.827 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:23.827 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:23.827 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:23.827 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:23.827 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:23.827 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:23.827 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:23.827 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:23.827 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:23.827 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.827 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.087 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:24.087 "name": "Existed_Raid", 00:12:24.087 "uuid": "66993b4b-bad6-41e9-8a98-2e6a2814bd4d", 00:12:24.087 "strip_size_kb": 64, 00:12:24.087 "state": "configuring", 00:12:24.087 "raid_level": "concat", 00:12:24.087 "superblock": true, 00:12:24.087 "num_base_bdevs": 2, 00:12:24.087 "num_base_bdevs_discovered": 1, 00:12:24.087 "num_base_bdevs_operational": 2, 00:12:24.087 "base_bdevs_list": [ 00:12:24.087 { 00:12:24.087 "name": "BaseBdev1", 00:12:24.087 "uuid": "6fd56feb-526b-4a3a-aebf-1e19a4b3d041", 00:12:24.087 "is_configured": true, 00:12:24.087 "data_offset": 2048, 00:12:24.087 "data_size": 63488 00:12:24.087 }, 00:12:24.087 { 00:12:24.087 "name": "BaseBdev2", 00:12:24.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.087 "is_configured": false, 00:12:24.087 "data_offset": 0, 00:12:24.087 "data_size": 0 00:12:24.087 } 00:12:24.087 ] 00:12:24.087 }' 00:12:24.087 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:24.087 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.655 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:24.914 [2024-07-25 13:12:35.143728] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.914 [2024-07-25 13:12:35.143873] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1955610 00:12:24.914 [2024-07-25 13:12:35.143887] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:24.914 [2024-07-25 13:12:35.144054] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1941690 00:12:24.914 [2024-07-25 13:12:35.144182] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1955610 00:12:24.914 [2024-07-25 13:12:35.144192] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1955610 00:12:24.914 [2024-07-25 13:12:35.144285] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.914 BaseBdev2 00:12:24.914 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:24.914 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:24.914 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:24.914 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:24.914 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:24.914 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:24.914 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:24.914 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:25.173 [ 00:12:25.173 { 00:12:25.173 "name": "BaseBdev2", 00:12:25.173 "aliases": [ 00:12:25.173 "231b73bc-866d-4b0d-9ad1-8bea2c69e432" 00:12:25.173 ], 00:12:25.173 "product_name": "Malloc disk", 00:12:25.173 "block_size": 512, 00:12:25.173 "num_blocks": 65536, 00:12:25.173 "uuid": "231b73bc-866d-4b0d-9ad1-8bea2c69e432", 00:12:25.173 "assigned_rate_limits": { 00:12:25.173 "rw_ios_per_sec": 0, 00:12:25.173 "rw_mbytes_per_sec": 0, 00:12:25.173 "r_mbytes_per_sec": 0, 00:12:25.173 
"w_mbytes_per_sec": 0 00:12:25.173 }, 00:12:25.173 "claimed": true, 00:12:25.173 "claim_type": "exclusive_write", 00:12:25.173 "zoned": false, 00:12:25.173 "supported_io_types": { 00:12:25.173 "read": true, 00:12:25.173 "write": true, 00:12:25.173 "unmap": true, 00:12:25.173 "flush": true, 00:12:25.173 "reset": true, 00:12:25.173 "nvme_admin": false, 00:12:25.173 "nvme_io": false, 00:12:25.173 "nvme_io_md": false, 00:12:25.173 "write_zeroes": true, 00:12:25.173 "zcopy": true, 00:12:25.173 "get_zone_info": false, 00:12:25.173 "zone_management": false, 00:12:25.173 "zone_append": false, 00:12:25.173 "compare": false, 00:12:25.173 "compare_and_write": false, 00:12:25.173 "abort": true, 00:12:25.173 "seek_hole": false, 00:12:25.173 "seek_data": false, 00:12:25.173 "copy": true, 00:12:25.173 "nvme_iov_md": false 00:12:25.173 }, 00:12:25.173 "memory_domains": [ 00:12:25.173 { 00:12:25.173 "dma_device_id": "system", 00:12:25.173 "dma_device_type": 1 00:12:25.173 }, 00:12:25.173 { 00:12:25.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.173 "dma_device_type": 2 00:12:25.173 } 00:12:25.173 ], 00:12:25.173 "driver_specific": {} 00:12:25.173 } 00:12:25.173 ] 00:12:25.173 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:25.173 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:25.173 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:25.173 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:25.173 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:25.173 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:25.173 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
00:12:25.173 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:25.173 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:25.173 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:25.173 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:25.173 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:25.173 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:25.173 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.173 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.432 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:25.432 "name": "Existed_Raid", 00:12:25.432 "uuid": "66993b4b-bad6-41e9-8a98-2e6a2814bd4d", 00:12:25.432 "strip_size_kb": 64, 00:12:25.432 "state": "online", 00:12:25.432 "raid_level": "concat", 00:12:25.432 "superblock": true, 00:12:25.432 "num_base_bdevs": 2, 00:12:25.432 "num_base_bdevs_discovered": 2, 00:12:25.432 "num_base_bdevs_operational": 2, 00:12:25.432 "base_bdevs_list": [ 00:12:25.432 { 00:12:25.432 "name": "BaseBdev1", 00:12:25.432 "uuid": "6fd56feb-526b-4a3a-aebf-1e19a4b3d041", 00:12:25.432 "is_configured": true, 00:12:25.432 "data_offset": 2048, 00:12:25.432 "data_size": 63488 00:12:25.432 }, 00:12:25.432 { 00:12:25.432 "name": "BaseBdev2", 00:12:25.432 "uuid": "231b73bc-866d-4b0d-9ad1-8bea2c69e432", 00:12:25.432 "is_configured": true, 00:12:25.432 "data_offset": 2048, 00:12:25.432 "data_size": 63488 00:12:25.432 } 00:12:25.432 ] 
00:12:25.432 }' 00:12:25.432 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:25.432 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.001 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:26.001 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:26.001 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:26.001 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:26.001 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:26.001 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:26.001 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:26.001 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:26.260 [2024-07-25 13:12:36.627901] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.260 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:26.260 "name": "Existed_Raid", 00:12:26.260 "aliases": [ 00:12:26.260 "66993b4b-bad6-41e9-8a98-2e6a2814bd4d" 00:12:26.260 ], 00:12:26.260 "product_name": "Raid Volume", 00:12:26.260 "block_size": 512, 00:12:26.260 "num_blocks": 126976, 00:12:26.260 "uuid": "66993b4b-bad6-41e9-8a98-2e6a2814bd4d", 00:12:26.260 "assigned_rate_limits": { 00:12:26.260 "rw_ios_per_sec": 0, 00:12:26.260 "rw_mbytes_per_sec": 0, 00:12:26.260 "r_mbytes_per_sec": 0, 00:12:26.260 "w_mbytes_per_sec": 0 00:12:26.260 }, 00:12:26.260 "claimed": false, 00:12:26.260 
"zoned": false, 00:12:26.260 "supported_io_types": { 00:12:26.260 "read": true, 00:12:26.260 "write": true, 00:12:26.260 "unmap": true, 00:12:26.260 "flush": true, 00:12:26.260 "reset": true, 00:12:26.260 "nvme_admin": false, 00:12:26.260 "nvme_io": false, 00:12:26.260 "nvme_io_md": false, 00:12:26.260 "write_zeroes": true, 00:12:26.260 "zcopy": false, 00:12:26.260 "get_zone_info": false, 00:12:26.260 "zone_management": false, 00:12:26.260 "zone_append": false, 00:12:26.260 "compare": false, 00:12:26.260 "compare_and_write": false, 00:12:26.260 "abort": false, 00:12:26.260 "seek_hole": false, 00:12:26.260 "seek_data": false, 00:12:26.260 "copy": false, 00:12:26.260 "nvme_iov_md": false 00:12:26.260 }, 00:12:26.260 "memory_domains": [ 00:12:26.260 { 00:12:26.260 "dma_device_id": "system", 00:12:26.260 "dma_device_type": 1 00:12:26.260 }, 00:12:26.260 { 00:12:26.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.260 "dma_device_type": 2 00:12:26.260 }, 00:12:26.260 { 00:12:26.260 "dma_device_id": "system", 00:12:26.260 "dma_device_type": 1 00:12:26.260 }, 00:12:26.260 { 00:12:26.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.260 "dma_device_type": 2 00:12:26.260 } 00:12:26.260 ], 00:12:26.260 "driver_specific": { 00:12:26.260 "raid": { 00:12:26.260 "uuid": "66993b4b-bad6-41e9-8a98-2e6a2814bd4d", 00:12:26.260 "strip_size_kb": 64, 00:12:26.260 "state": "online", 00:12:26.260 "raid_level": "concat", 00:12:26.260 "superblock": true, 00:12:26.260 "num_base_bdevs": 2, 00:12:26.260 "num_base_bdevs_discovered": 2, 00:12:26.260 "num_base_bdevs_operational": 2, 00:12:26.260 "base_bdevs_list": [ 00:12:26.260 { 00:12:26.260 "name": "BaseBdev1", 00:12:26.260 "uuid": "6fd56feb-526b-4a3a-aebf-1e19a4b3d041", 00:12:26.260 "is_configured": true, 00:12:26.260 "data_offset": 2048, 00:12:26.260 "data_size": 63488 00:12:26.260 }, 00:12:26.260 { 00:12:26.260 "name": "BaseBdev2", 00:12:26.260 "uuid": "231b73bc-866d-4b0d-9ad1-8bea2c69e432", 00:12:26.260 "is_configured": true, 
00:12:26.260 "data_offset": 2048, 00:12:26.260 "data_size": 63488 00:12:26.260 } 00:12:26.260 ] 00:12:26.260 } 00:12:26.260 } 00:12:26.260 }' 00:12:26.261 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:26.261 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:26.261 BaseBdev2' 00:12:26.261 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:26.261 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:26.261 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:26.520 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:26.520 "name": "BaseBdev1", 00:12:26.520 "aliases": [ 00:12:26.520 "6fd56feb-526b-4a3a-aebf-1e19a4b3d041" 00:12:26.520 ], 00:12:26.520 "product_name": "Malloc disk", 00:12:26.520 "block_size": 512, 00:12:26.520 "num_blocks": 65536, 00:12:26.520 "uuid": "6fd56feb-526b-4a3a-aebf-1e19a4b3d041", 00:12:26.520 "assigned_rate_limits": { 00:12:26.520 "rw_ios_per_sec": 0, 00:12:26.520 "rw_mbytes_per_sec": 0, 00:12:26.520 "r_mbytes_per_sec": 0, 00:12:26.520 "w_mbytes_per_sec": 0 00:12:26.520 }, 00:12:26.520 "claimed": true, 00:12:26.520 "claim_type": "exclusive_write", 00:12:26.520 "zoned": false, 00:12:26.520 "supported_io_types": { 00:12:26.520 "read": true, 00:12:26.520 "write": true, 00:12:26.520 "unmap": true, 00:12:26.520 "flush": true, 00:12:26.520 "reset": true, 00:12:26.520 "nvme_admin": false, 00:12:26.520 "nvme_io": false, 00:12:26.520 "nvme_io_md": false, 00:12:26.520 "write_zeroes": true, 00:12:26.520 "zcopy": true, 00:12:26.520 "get_zone_info": false, 00:12:26.520 
"zone_management": false, 00:12:26.520 "zone_append": false, 00:12:26.520 "compare": false, 00:12:26.520 "compare_and_write": false, 00:12:26.520 "abort": true, 00:12:26.520 "seek_hole": false, 00:12:26.520 "seek_data": false, 00:12:26.520 "copy": true, 00:12:26.520 "nvme_iov_md": false 00:12:26.520 }, 00:12:26.520 "memory_domains": [ 00:12:26.520 { 00:12:26.520 "dma_device_id": "system", 00:12:26.520 "dma_device_type": 1 00:12:26.520 }, 00:12:26.520 { 00:12:26.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.520 "dma_device_type": 2 00:12:26.520 } 00:12:26.520 ], 00:12:26.520 "driver_specific": {} 00:12:26.520 }' 00:12:26.520 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:26.520 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:26.520 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:26.520 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:26.779 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:26.779 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:26.779 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:26.779 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:26.779 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:26.779 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:26.779 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:26.779 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:26.779 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:12:26.779 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:26.779 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:27.039 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:27.039 "name": "BaseBdev2", 00:12:27.039 "aliases": [ 00:12:27.039 "231b73bc-866d-4b0d-9ad1-8bea2c69e432" 00:12:27.039 ], 00:12:27.039 "product_name": "Malloc disk", 00:12:27.039 "block_size": 512, 00:12:27.039 "num_blocks": 65536, 00:12:27.039 "uuid": "231b73bc-866d-4b0d-9ad1-8bea2c69e432", 00:12:27.039 "assigned_rate_limits": { 00:12:27.039 "rw_ios_per_sec": 0, 00:12:27.039 "rw_mbytes_per_sec": 0, 00:12:27.039 "r_mbytes_per_sec": 0, 00:12:27.039 "w_mbytes_per_sec": 0 00:12:27.039 }, 00:12:27.039 "claimed": true, 00:12:27.039 "claim_type": "exclusive_write", 00:12:27.039 "zoned": false, 00:12:27.039 "supported_io_types": { 00:12:27.039 "read": true, 00:12:27.039 "write": true, 00:12:27.039 "unmap": true, 00:12:27.039 "flush": true, 00:12:27.039 "reset": true, 00:12:27.039 "nvme_admin": false, 00:12:27.039 "nvme_io": false, 00:12:27.039 "nvme_io_md": false, 00:12:27.039 "write_zeroes": true, 00:12:27.039 "zcopy": true, 00:12:27.039 "get_zone_info": false, 00:12:27.039 "zone_management": false, 00:12:27.039 "zone_append": false, 00:12:27.039 "compare": false, 00:12:27.039 "compare_and_write": false, 00:12:27.039 "abort": true, 00:12:27.039 "seek_hole": false, 00:12:27.039 "seek_data": false, 00:12:27.039 "copy": true, 00:12:27.039 "nvme_iov_md": false 00:12:27.039 }, 00:12:27.039 "memory_domains": [ 00:12:27.039 { 00:12:27.039 "dma_device_id": "system", 00:12:27.039 "dma_device_type": 1 00:12:27.039 }, 00:12:27.039 { 00:12:27.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.039 "dma_device_type": 2 00:12:27.039 } 00:12:27.039 ], 
00:12:27.039 "driver_specific": {} 00:12:27.039 }' 00:12:27.039 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:27.039 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:27.297 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:27.297 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:27.297 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:27.297 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:27.297 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:27.297 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:27.297 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:27.297 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:27.297 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:27.557 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:27.557 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:27.557 [2024-07-25 13:12:37.991278] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:27.557 [2024-07-25 13:12:37.991303] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.557 [2024-07-25 13:12:37.991340] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 
00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.557 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.816 13:12:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:27.816 "name": "Existed_Raid", 00:12:27.816 "uuid": "66993b4b-bad6-41e9-8a98-2e6a2814bd4d", 00:12:27.816 "strip_size_kb": 64, 00:12:27.816 "state": "offline", 00:12:27.816 "raid_level": "concat", 00:12:27.816 "superblock": true, 00:12:27.816 "num_base_bdevs": 2, 00:12:27.816 "num_base_bdevs_discovered": 1, 00:12:27.816 "num_base_bdevs_operational": 1, 00:12:27.816 "base_bdevs_list": [ 00:12:27.816 { 00:12:27.816 "name": null, 00:12:27.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.816 "is_configured": false, 00:12:27.816 "data_offset": 2048, 00:12:27.816 "data_size": 63488 00:12:27.816 }, 00:12:27.816 { 00:12:27.816 "name": "BaseBdev2", 00:12:27.816 "uuid": "231b73bc-866d-4b0d-9ad1-8bea2c69e432", 00:12:27.816 "is_configured": true, 00:12:27.816 "data_offset": 2048, 00:12:27.816 "data_size": 63488 00:12:27.816 } 00:12:27.816 ] 00:12:27.816 }' 00:12:27.816 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:27.816 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.385 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:28.385 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:28.385 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.385 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:28.645 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:28.645 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:28.645 13:12:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:28.904 [2024-07-25 13:12:39.263787] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:28.904 [2024-07-25 13:12:39.263840] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1955610 name Existed_Raid, state offline 00:12:28.904 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:28.904 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:28.904 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.904 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:29.163 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:29.163 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:29.163 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:12:29.163 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 835973 00:12:29.163 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 835973 ']' 00:12:29.163 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 835973 00:12:29.163 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:29.163 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:29.163 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 835973 
00:12:29.163 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:29.163 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:29.163 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 835973' 00:12:29.163 killing process with pid 835973 00:12:29.163 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 835973 00:12:29.163 [2024-07-25 13:12:39.579146] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.163 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 835973 00:12:29.163 [2024-07-25 13:12:39.580021] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.423 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:29.423 00:12:29.423 real 0m9.472s 00:12:29.423 user 0m17.186s 00:12:29.423 sys 0m1.870s 00:12:29.423 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.423 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.423 ************************************ 00:12:29.423 END TEST raid_state_function_test_sb 00:12:29.423 ************************************ 00:12:29.423 13:12:39 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:12:29.423 13:12:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:29.423 13:12:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.423 13:12:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.423 ************************************ 00:12:29.423 START TEST raid_superblock_test 00:12:29.423 ************************************ 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=837785 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 837785 /var/tmp/spdk-raid.sock 00:12:29.423 
13:12:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 837785 ']' 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:29.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.423 13:12:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.683 [2024-07-25 13:12:39.915891] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:12:29.683 [2024-07-25 13:12:39.915947] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837785 ] 00:12:29.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.683 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:29.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.683 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:29.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.683 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:29.683 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.683 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.684 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.684 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.684 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.684 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.684 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.684 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.684 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.684 EAL: Requested device 0000:3d:02.3 cannot be used 
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3d:02.4 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3d:02.5 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3d:02.6 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3d:02.7 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3f:01.0 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3f:01.1 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3f:01.2 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3f:01.3 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3f:01.4 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3f:01.5 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3f:01.6 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3f:01.7 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3f:02.0 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3f:02.1 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3f:02.2 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3f:02.3 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3f:02.4 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3f:02.5 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3f:02.6 cannot be used
00:12:29.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:29.684 EAL: Requested device 0000:3f:02.7 cannot be used
00:12:29.684 [2024-07-25 13:12:40.050440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:29.684 [2024-07-25 13:12:40.138460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:29.943 [2024-07-25 13:12:40.196707] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:29.943 [2024-07-25 13:12:40.196736] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:30.512 13:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:30.512 13:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:12:30.512 13:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 ))
00:12:30.512 13:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs ))
00:12:30.512 13:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1
00:12:30.513 13:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1
00:12:30.513 13:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:12:30.513 13:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:30.513 13:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt)
00:12:30.513 13:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:30.513 13:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:12:30.513 malloc1
00:12:30.772 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:30.772 [2024-07-25 13:12:41.212596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:30.772 [2024-07-25 13:12:41.212642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:30.772 [2024-07-25 13:12:41.212660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13aa2f0
00:12:30.772 [2024-07-25 13:12:41.212671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:30.772 [2024-07-25 13:12:41.214075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:30.772 [2024-07-25 13:12:41.214102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:30.772 pt1
00:12:30.772 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ ))
00:12:30.772 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs ))
00:12:30.772 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2
00:12:30.772 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2
00:12:30.772 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:12:30.772 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:30.772 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt)
00:12:30.772 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:30.772 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:12:31.031 malloc2
00:12:31.031 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:31.291 [2024-07-25 13:12:41.670473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:31.291 [2024-07-25 13:12:41.670511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:31.291 [2024-07-25 13:12:41.670526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1541f70
00:12:31.291 [2024-07-25 13:12:41.670537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:31.291 [2024-07-25 13:12:41.671852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:31.291 [2024-07-25 13:12:41.671878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:31.291 pt2
00:12:31.291 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ ))
00:12:31.291 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs ))
00:12:31.291 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
00:12:31.551 [2024-07-25 13:12:41.899103] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:31.551 [2024-07-25 13:12:41.900231] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:31.551 [2024-07-25 13:12:41.900351] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1544760
00:12:31.551 [2024-07-25 13:12:41.900362] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:12:31.551 [2024-07-25 13:12:41.900539] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15474d0
00:12:31.551 [2024-07-25 13:12:41.900653] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1544760
00:12:31.551 [2024-07-25 13:12:41.900662] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1544760
00:12:31.551 [2024-07-25 13:12:41.900757] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:31.551 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:12:31.551 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:12:31.551 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:12:31.551 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:12:31.551 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:12:31.551 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:12:31.551 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:12:31.551 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:12:31.551 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:12:31.551 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:12:31.551 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:31.551 13:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:31.810 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:12:31.810 "name": "raid_bdev1",
00:12:31.810 "uuid": "c1827d4a-07a0-460c-a141-197f53e33834",
00:12:31.810 "strip_size_kb": 64,
00:12:31.810 "state": "online",
00:12:31.810 "raid_level": "concat",
00:12:31.810 "superblock": true,
00:12:31.810 "num_base_bdevs": 2,
00:12:31.810 "num_base_bdevs_discovered": 2,
00:12:31.810 "num_base_bdevs_operational": 2,
00:12:31.810 "base_bdevs_list": [
00:12:31.810 {
00:12:31.810 "name": "pt1",
00:12:31.810 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:31.810 "is_configured": true,
00:12:31.810 "data_offset": 2048,
00:12:31.810 "data_size": 63488
00:12:31.810 },
00:12:31.810 {
00:12:31.810 "name": "pt2",
00:12:31.810 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:31.810 "is_configured": true,
00:12:31.810 "data_offset": 2048,
00:12:31.810 "data_size": 63488
00:12:31.810 }
00:12:31.810 ]
00:12:31.810 }'
00:12:31.810 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:12:31.811 13:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:32.378 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1
00:12:32.378 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1
00:12:32.378 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:12:32.378 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:12:32.378 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:12:32.378 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name
00:12:32.378 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:12:32.378 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:12:32.638 [2024-07-25 13:12:42.918100] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:32.638 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:12:32.638 "name": "raid_bdev1",
00:12:32.638 "aliases": [
00:12:32.638 "c1827d4a-07a0-460c-a141-197f53e33834"
00:12:32.638 ],
00:12:32.638 "product_name": "Raid Volume",
00:12:32.638 "block_size": 512,
00:12:32.638 "num_blocks": 126976,
00:12:32.638 "uuid": "c1827d4a-07a0-460c-a141-197f53e33834",
00:12:32.638 "assigned_rate_limits": {
00:12:32.638 "rw_ios_per_sec": 0,
00:12:32.638 "rw_mbytes_per_sec": 0,
00:12:32.638 "r_mbytes_per_sec": 0,
00:12:32.638 "w_mbytes_per_sec": 0
00:12:32.638 },
00:12:32.638 "claimed": false,
00:12:32.638 "zoned": false,
00:12:32.638 "supported_io_types": {
00:12:32.638 "read": true,
00:12:32.638 "write": true,
00:12:32.638 "unmap": true,
00:12:32.638 "flush": true,
00:12:32.638 "reset": true,
00:12:32.638 "nvme_admin": false,
00:12:32.638 "nvme_io": false,
00:12:32.638 "nvme_io_md": false,
00:12:32.638 "write_zeroes": true,
00:12:32.638 "zcopy": false,
00:12:32.638 "get_zone_info": false,
00:12:32.638 "zone_management": false,
00:12:32.638 "zone_append": false,
00:12:32.638 "compare": false,
00:12:32.638 "compare_and_write": false,
00:12:32.638 "abort": false,
00:12:32.638 "seek_hole": false,
00:12:32.638 "seek_data": false,
00:12:32.638 "copy": false,
00:12:32.638 "nvme_iov_md": false
00:12:32.638 },
00:12:32.638 "memory_domains": [
00:12:32.638 {
00:12:32.638 "dma_device_id": "system",
00:12:32.638 "dma_device_type": 1
00:12:32.638 },
00:12:32.638 {
00:12:32.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:32.638 "dma_device_type": 2
00:12:32.638 },
00:12:32.638 {
00:12:32.638 "dma_device_id": "system",
00:12:32.638 "dma_device_type": 1
00:12:32.638 },
00:12:32.638 {
00:12:32.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:32.638 "dma_device_type": 2
00:12:32.638 }
00:12:32.638 ],
00:12:32.638 "driver_specific": {
00:12:32.638 "raid": {
00:12:32.638 "uuid": "c1827d4a-07a0-460c-a141-197f53e33834",
00:12:32.638 "strip_size_kb": 64,
00:12:32.638 "state": "online",
00:12:32.638 "raid_level": "concat",
00:12:32.638 "superblock": true,
00:12:32.638 "num_base_bdevs": 2,
00:12:32.638 "num_base_bdevs_discovered": 2,
00:12:32.638 "num_base_bdevs_operational": 2,
00:12:32.638 "base_bdevs_list": [
00:12:32.638 {
00:12:32.638 "name": "pt1",
00:12:32.638 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:32.638 "is_configured": true,
00:12:32.638 "data_offset": 2048,
00:12:32.638 "data_size": 63488
00:12:32.638 },
00:12:32.638 {
00:12:32.638 "name": "pt2",
00:12:32.638 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:32.638 "is_configured": true,
00:12:32.638 "data_offset": 2048,
00:12:32.638 "data_size": 63488
00:12:32.638 }
00:12:32.638 ]
00:12:32.638 }
00:12:32.638 }
00:12:32.638 }'
00:12:32.638 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:32.638 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1
00:12:32.638 pt2'
00:12:32.638 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:12:32.638 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1
00:12:32.638 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:12:32.898 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:12:32.898 "name": "pt1",
00:12:32.898 "aliases": [
00:12:32.898 "00000000-0000-0000-0000-000000000001"
00:12:32.898 ],
00:12:32.898 "product_name": "passthru",
00:12:32.898 "block_size": 512,
00:12:32.898 "num_blocks": 65536,
00:12:32.898 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:32.898 "assigned_rate_limits": {
00:12:32.898 "rw_ios_per_sec": 0,
00:12:32.898 "rw_mbytes_per_sec": 0,
00:12:32.898 "r_mbytes_per_sec": 0,
00:12:32.898 "w_mbytes_per_sec": 0
00:12:32.898 },
00:12:32.898 "claimed": true,
00:12:32.898 "claim_type": "exclusive_write",
00:12:32.898 "zoned": false,
00:12:32.898 "supported_io_types": {
00:12:32.898 "read": true,
00:12:32.898 "write": true,
00:12:32.898 "unmap": true,
00:12:32.898 "flush": true,
00:12:32.898 "reset": true,
00:12:32.898 "nvme_admin": false,
00:12:32.898 "nvme_io": false,
00:12:32.898 "nvme_io_md": false,
00:12:32.898 "write_zeroes": true,
00:12:32.898 "zcopy": true,
00:12:32.898 "get_zone_info": false,
00:12:32.898 "zone_management": false,
00:12:32.898 "zone_append": false,
00:12:32.898 "compare": false,
00:12:32.898 "compare_and_write": false,
00:12:32.898 "abort": true,
00:12:32.898 "seek_hole": false,
00:12:32.898 "seek_data": false,
00:12:32.898 "copy": true,
00:12:32.898 "nvme_iov_md": false
00:12:32.898 },
00:12:32.898 "memory_domains": [
00:12:32.898 {
00:12:32.898 "dma_device_id": "system",
00:12:32.898 "dma_device_type": 1
00:12:32.898 },
00:12:32.898 {
00:12:32.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:32.898 "dma_device_type": 2
00:12:32.898 }
00:12:32.898 ],
00:12:32.898 "driver_specific": {
00:12:32.898 "passthru": {
00:12:32.898 "name": "pt1",
00:12:32.898 "base_bdev_name": "malloc1"
00:12:32.898 }
00:12:32.898 }
00:12:32.898 }'
00:12:32.898 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:12:32.898 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:12:32.898 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:12:32.898 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:12:32.898 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:12:33.156 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:12:33.156 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:12:33.156 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:12:33.156 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:12:33.156 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:12:33.156 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:12:33.156 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:12:33.156 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:12:33.156 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2
00:12:33.156 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:12:33.414 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:12:33.414 "name": "pt2",
00:12:33.414 "aliases": [
00:12:33.414 "00000000-0000-0000-0000-000000000002"
00:12:33.414 ],
00:12:33.414 "product_name": "passthru",
00:12:33.414 "block_size": 512,
00:12:33.414 "num_blocks": 65536,
00:12:33.414 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:33.414 "assigned_rate_limits": {
00:12:33.414 "rw_ios_per_sec": 0,
00:12:33.414 "rw_mbytes_per_sec": 0,
00:12:33.414 "r_mbytes_per_sec": 0,
00:12:33.414 "w_mbytes_per_sec": 0
00:12:33.414 },
00:12:33.414 "claimed": true,
00:12:33.414 "claim_type": "exclusive_write",
00:12:33.414 "zoned": false,
00:12:33.414 "supported_io_types": {
00:12:33.414 "read": true,
00:12:33.414 "write": true,
00:12:33.414 "unmap": true,
00:12:33.414 "flush": true,
00:12:33.414 "reset": true,
00:12:33.414 "nvme_admin": false,
00:12:33.414 "nvme_io": false,
00:12:33.414 "nvme_io_md": false,
00:12:33.414 "write_zeroes": true,
00:12:33.414 "zcopy": true,
00:12:33.415 "get_zone_info": false,
00:12:33.415 "zone_management": false,
00:12:33.415 "zone_append": false,
00:12:33.415 "compare": false,
00:12:33.415 "compare_and_write": false,
00:12:33.415 "abort": true,
00:12:33.415 "seek_hole": false,
00:12:33.415 "seek_data": false,
00:12:33.415 "copy": true,
00:12:33.415 "nvme_iov_md": false
00:12:33.415 },
00:12:33.415 "memory_domains": [
00:12:33.415 {
00:12:33.415 "dma_device_id": "system",
00:12:33.415 "dma_device_type": 1
00:12:33.415 },
00:12:33.415 {
00:12:33.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:33.415 "dma_device_type": 2
00:12:33.415 }
00:12:33.415 ],
00:12:33.415 "driver_specific": {
00:12:33.415 "passthru": {
00:12:33.415 "name": "pt2",
00:12:33.415 "base_bdev_name": "malloc2"
00:12:33.415 }
00:12:33.415 }
00:12:33.415 }'
00:12:33.415 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:12:33.415 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:12:33.415 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:12:33.415 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:12:33.673 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:12:33.673 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:12:33.673 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:12:33.673 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:12:33.673 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:12:33.674 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:12:33.674 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:12:33.674 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:12:33.674 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:12:33.674 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid'
00:12:33.933 [2024-07-25 13:12:44.321770] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:33.933 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=c1827d4a-07a0-460c-a141-197f53e33834
00:12:33.933 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z c1827d4a-07a0-460c-a141-197f53e33834 ']'
00:12:33.933 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:12:34.193 [2024-07-25 13:12:44.550132] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:34.193 [2024-07-25 13:12:44.550153] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:34.193 [2024-07-25 13:12:44.550206] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:34.193 [2024-07-25 13:12:44.550246] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:34.193 [2024-07-25 13:12:44.550256] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1544760 name raid_bdev1, state offline
00:12:34.193 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:34.193 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]'
00:12:34.452 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev=
00:12:34.452 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']'
00:12:34.452 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}"
00:12:34.452 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:12:34.711 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}"
00:12:34.711 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:12:34.970 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:12:34.970 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:35.229 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']'
00:12:35.229 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1
00:12:35.229 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:12:35.229 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1
00:12:35.229 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:12:35.229 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:35.229 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:12:35.229 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:35.229 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:12:35.229 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:35.229 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:12:35.229 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]]
00:12:35.229 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1
00:12:35.229 [2024-07-25 13:12:45.705124] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:35.229 [2024-07-25 13:12:45.706395] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:35.229 [2024-07-25 13:12:45.706448] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:12:35.229 [2024-07-25 13:12:45.706487] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:12:35.229 [2024-07-25 13:12:45.706505] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:35.229 [2024-07-25 13:12:45.706514] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15449f0 name raid_bdev1, state configuring
00:12:35.229 request:
00:12:35.229 {
00:12:35.229 "name": "raid_bdev1",
00:12:35.229 "raid_level": "concat",
00:12:35.229 "base_bdevs": [
00:12:35.229 "malloc1",
00:12:35.229 "malloc2"
00:12:35.229 ],
00:12:35.229 "strip_size_kb": 64,
00:12:35.229 "superblock": false,
00:12:35.229 "method": "bdev_raid_create",
00:12:35.229 "req_id": 1
00:12:35.229 }
00:12:35.229 Got JSON-RPC error response
00:12:35.229 response:
00:12:35.229 {
00:12:35.229 "code": -17,
00:12:35.229 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:35.229 }
00:12:35.488 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:12:35.488 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:35.488 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:35.488 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:35.488 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:35.488 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]'
00:12:35.488 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev=
00:12:35.488 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']'
00:12:35.488 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:35.748 [2024-07-25 13:12:46.158268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:35.748 [2024-07-25 13:12:46.158313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:35.748 [2024-07-25 13:12:46.158329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x154dbf0
00:12:35.748 [2024-07-25 13:12:46.158341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:35.748 [2024-07-25 13:12:46.159823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:35.748 [2024-07-25 13:12:46.159852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:35.748 [2024-07-25 13:12:46.159915] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:35.748 [2024-07-25 13:12:46.159939] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:35.748 pt1
00:12:35.748 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2
00:12:35.748 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:12:35.748 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:12:35.748 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:12:35.748 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:12:35.748 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:12:35.748 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:12:35.748 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:12:35.748 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:12:35.748 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:12:35.748 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:35.748 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:36.008 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:12:36.008 "name": "raid_bdev1",
00:12:36.008 "uuid": "c1827d4a-07a0-460c-a141-197f53e33834",
00:12:36.008 "strip_size_kb": 64,
00:12:36.008 "state": "configuring",
00:12:36.008 "raid_level": "concat",
00:12:36.008 "superblock": true,
00:12:36.008 "num_base_bdevs": 2,
00:12:36.008 "num_base_bdevs_discovered": 1,
00:12:36.008 "num_base_bdevs_operational": 2,
00:12:36.008 "base_bdevs_list": [
00:12:36.008 {
00:12:36.008 "name": "pt1",
00:12:36.008 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:36.008 "is_configured": true,
00:12:36.008 "data_offset": 2048,
00:12:36.008 "data_size": 63488
00:12:36.008 },
00:12:36.008 {
00:12:36.008 "name": null,
00:12:36.008 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:36.008 "is_configured": false,
00:12:36.008 "data_offset": 2048,
00:12:36.008 "data_size": 63488
00:12:36.008 }
00:12:36.008 ]
00:12:36.008 }'
00:12:36.008 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:12:36.008 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:36.575 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']'
00:12:36.575 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 ))
00:12:36.575 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs ))
00:12:36.575 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:36.834 [2024-07-25 13:12:47.164929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:36.834 [2024-07-25 13:12:47.164978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:36.834 [2024-07-25 13:12:47.164995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13aa520
00:12:36.834 [2024-07-25 13:12:47.165007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:36.834 [2024-07-25 13:12:47.165335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:36.834 [2024-07-25 13:12:47.165354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:36.834 [2024-07-25 13:12:47.165411] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:36.834 [2024-07-25 13:12:47.165428] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:36.834 [2024-07-25 13:12:47.165517] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1543160
00:12:36.834 [2024-07-25 13:12:47.165527] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:12:36.834 [2024-07-25 13:12:47.165686] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x154e6d0
00:12:36.834 [2024-07-25 13:12:47.165796] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1543160
00:12:36.834 [2024-07-25 13:12:47.165805] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1543160
00:12:36.834 [2024-07-25 13:12:47.165892] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:36.834 pt2
00:12:36.834 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ ))
00:12:36.834 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs ))
00:12:36.834 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:12:36.835 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:12:36.835 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:12:36.835 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:12:36.835 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:12:36.835 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:12:36.835 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:12:36.835 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:12:36.835 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:12:36.835 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:12:36.835 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:36.835 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:37.094 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:12:37.094 "name": "raid_bdev1",
00:12:37.094 "uuid": "c1827d4a-07a0-460c-a141-197f53e33834",
00:12:37.094 "strip_size_kb": 64,
00:12:37.094 "state": "online",
00:12:37.094 "raid_level": "concat",
00:12:37.094 "superblock": true,
00:12:37.094 "num_base_bdevs": 2,
00:12:37.094 "num_base_bdevs_discovered": 2,
00:12:37.094 "num_base_bdevs_operational": 2,
00:12:37.094 "base_bdevs_list": [
00:12:37.094 {
00:12:37.094 "name": "pt1",
00:12:37.094 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:37.094 "is_configured": true,
00:12:37.094 "data_offset": 2048,
00:12:37.094 "data_size": 63488
00:12:37.094 },
00:12:37.094 {
00:12:37.094 "name": "pt2",
00:12:37.094 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:37.094 "is_configured": true,
00:12:37.094 "data_offset": 2048,
00:12:37.094 "data_size": 63488
00:12:37.094 }
00:12:37.094 ]
00:12:37.094 }'
00:12:37.094 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:12:37.094 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:37.662 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1
00:12:37.662 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1
00:12:37.662 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:12:37.662 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:12:37.662 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:12:37.662 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name
00:12:37.662 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 --
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:37.662 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:37.662 [2024-07-25 13:12:48.119668] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.662 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:37.662 "name": "raid_bdev1", 00:12:37.662 "aliases": [ 00:12:37.662 "c1827d4a-07a0-460c-a141-197f53e33834" 00:12:37.662 ], 00:12:37.662 "product_name": "Raid Volume", 00:12:37.662 "block_size": 512, 00:12:37.662 "num_blocks": 126976, 00:12:37.662 "uuid": "c1827d4a-07a0-460c-a141-197f53e33834", 00:12:37.662 "assigned_rate_limits": { 00:12:37.662 "rw_ios_per_sec": 0, 00:12:37.662 "rw_mbytes_per_sec": 0, 00:12:37.662 "r_mbytes_per_sec": 0, 00:12:37.662 "w_mbytes_per_sec": 0 00:12:37.662 }, 00:12:37.662 "claimed": false, 00:12:37.662 "zoned": false, 00:12:37.662 "supported_io_types": { 00:12:37.662 "read": true, 00:12:37.662 "write": true, 00:12:37.662 "unmap": true, 00:12:37.662 "flush": true, 00:12:37.662 "reset": true, 00:12:37.662 "nvme_admin": false, 00:12:37.662 "nvme_io": false, 00:12:37.662 "nvme_io_md": false, 00:12:37.662 "write_zeroes": true, 00:12:37.662 "zcopy": false, 00:12:37.662 "get_zone_info": false, 00:12:37.662 "zone_management": false, 00:12:37.662 "zone_append": false, 00:12:37.662 "compare": false, 00:12:37.662 "compare_and_write": false, 00:12:37.662 "abort": false, 00:12:37.662 "seek_hole": false, 00:12:37.662 "seek_data": false, 00:12:37.662 "copy": false, 00:12:37.662 "nvme_iov_md": false 00:12:37.662 }, 00:12:37.662 "memory_domains": [ 00:12:37.662 { 00:12:37.662 "dma_device_id": "system", 00:12:37.662 "dma_device_type": 1 00:12:37.662 }, 00:12:37.662 { 00:12:37.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.662 "dma_device_type": 2 00:12:37.662 }, 00:12:37.662 { 00:12:37.662 "dma_device_id": "system", 
00:12:37.662 "dma_device_type": 1 00:12:37.662 }, 00:12:37.662 { 00:12:37.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.662 "dma_device_type": 2 00:12:37.662 } 00:12:37.662 ], 00:12:37.662 "driver_specific": { 00:12:37.662 "raid": { 00:12:37.662 "uuid": "c1827d4a-07a0-460c-a141-197f53e33834", 00:12:37.662 "strip_size_kb": 64, 00:12:37.662 "state": "online", 00:12:37.662 "raid_level": "concat", 00:12:37.662 "superblock": true, 00:12:37.662 "num_base_bdevs": 2, 00:12:37.662 "num_base_bdevs_discovered": 2, 00:12:37.662 "num_base_bdevs_operational": 2, 00:12:37.662 "base_bdevs_list": [ 00:12:37.662 { 00:12:37.662 "name": "pt1", 00:12:37.662 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:37.662 "is_configured": true, 00:12:37.662 "data_offset": 2048, 00:12:37.662 "data_size": 63488 00:12:37.662 }, 00:12:37.662 { 00:12:37.662 "name": "pt2", 00:12:37.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:37.662 "is_configured": true, 00:12:37.662 "data_offset": 2048, 00:12:37.662 "data_size": 63488 00:12:37.662 } 00:12:37.662 ] 00:12:37.662 } 00:12:37.662 } 00:12:37.662 }' 00:12:37.662 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:37.921 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:37.921 pt2' 00:12:37.921 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:37.921 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:37.921 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:38.180 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:38.180 "name": "pt1", 00:12:38.180 "aliases": [ 00:12:38.180 "00000000-0000-0000-0000-000000000001" 
00:12:38.180 ], 00:12:38.180 "product_name": "passthru", 00:12:38.180 "block_size": 512, 00:12:38.180 "num_blocks": 65536, 00:12:38.180 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:38.180 "assigned_rate_limits": { 00:12:38.180 "rw_ios_per_sec": 0, 00:12:38.180 "rw_mbytes_per_sec": 0, 00:12:38.180 "r_mbytes_per_sec": 0, 00:12:38.180 "w_mbytes_per_sec": 0 00:12:38.180 }, 00:12:38.180 "claimed": true, 00:12:38.180 "claim_type": "exclusive_write", 00:12:38.180 "zoned": false, 00:12:38.180 "supported_io_types": { 00:12:38.180 "read": true, 00:12:38.180 "write": true, 00:12:38.180 "unmap": true, 00:12:38.180 "flush": true, 00:12:38.180 "reset": true, 00:12:38.180 "nvme_admin": false, 00:12:38.180 "nvme_io": false, 00:12:38.180 "nvme_io_md": false, 00:12:38.180 "write_zeroes": true, 00:12:38.180 "zcopy": true, 00:12:38.180 "get_zone_info": false, 00:12:38.180 "zone_management": false, 00:12:38.180 "zone_append": false, 00:12:38.180 "compare": false, 00:12:38.180 "compare_and_write": false, 00:12:38.180 "abort": true, 00:12:38.180 "seek_hole": false, 00:12:38.180 "seek_data": false, 00:12:38.180 "copy": true, 00:12:38.180 "nvme_iov_md": false 00:12:38.180 }, 00:12:38.180 "memory_domains": [ 00:12:38.180 { 00:12:38.180 "dma_device_id": "system", 00:12:38.180 "dma_device_type": 1 00:12:38.180 }, 00:12:38.180 { 00:12:38.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.180 "dma_device_type": 2 00:12:38.180 } 00:12:38.180 ], 00:12:38.180 "driver_specific": { 00:12:38.180 "passthru": { 00:12:38.180 "name": "pt1", 00:12:38.180 "base_bdev_name": "malloc1" 00:12:38.180 } 00:12:38.180 } 00:12:38.180 }' 00:12:38.180 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:38.180 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:38.180 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:38.180 13:12:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:38.180 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:38.180 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:38.180 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.180 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.180 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:38.180 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.439 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.439 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:38.439 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:38.439 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:38.439 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:38.699 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:38.699 "name": "pt2", 00:12:38.699 "aliases": [ 00:12:38.699 "00000000-0000-0000-0000-000000000002" 00:12:38.699 ], 00:12:38.699 "product_name": "passthru", 00:12:38.699 "block_size": 512, 00:12:38.699 "num_blocks": 65536, 00:12:38.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:38.699 "assigned_rate_limits": { 00:12:38.699 "rw_ios_per_sec": 0, 00:12:38.699 "rw_mbytes_per_sec": 0, 00:12:38.699 "r_mbytes_per_sec": 0, 00:12:38.699 "w_mbytes_per_sec": 0 00:12:38.699 }, 00:12:38.699 "claimed": true, 00:12:38.699 "claim_type": "exclusive_write", 00:12:38.699 "zoned": false, 00:12:38.699 "supported_io_types": { 00:12:38.699 "read": true, 
00:12:38.699 "write": true, 00:12:38.699 "unmap": true, 00:12:38.699 "flush": true, 00:12:38.699 "reset": true, 00:12:38.699 "nvme_admin": false, 00:12:38.699 "nvme_io": false, 00:12:38.699 "nvme_io_md": false, 00:12:38.699 "write_zeroes": true, 00:12:38.699 "zcopy": true, 00:12:38.699 "get_zone_info": false, 00:12:38.699 "zone_management": false, 00:12:38.699 "zone_append": false, 00:12:38.699 "compare": false, 00:12:38.699 "compare_and_write": false, 00:12:38.699 "abort": true, 00:12:38.699 "seek_hole": false, 00:12:38.699 "seek_data": false, 00:12:38.699 "copy": true, 00:12:38.699 "nvme_iov_md": false 00:12:38.699 }, 00:12:38.699 "memory_domains": [ 00:12:38.699 { 00:12:38.699 "dma_device_id": "system", 00:12:38.699 "dma_device_type": 1 00:12:38.699 }, 00:12:38.699 { 00:12:38.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.699 "dma_device_type": 2 00:12:38.699 } 00:12:38.699 ], 00:12:38.699 "driver_specific": { 00:12:38.699 "passthru": { 00:12:38.699 "name": "pt2", 00:12:38.699 "base_bdev_name": "malloc2" 00:12:38.699 } 00:12:38.699 } 00:12:38.699 }' 00:12:38.699 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:38.699 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:38.699 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:38.699 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:38.699 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:38.699 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:38.699 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.958 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.958 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:38.958 13:12:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.958 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.958 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:38.958 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:12:38.958 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:39.218 [2024-07-25 13:12:49.531371] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:39.218 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' c1827d4a-07a0-460c-a141-197f53e33834 '!=' c1827d4a-07a0-460c-a141-197f53e33834 ']' 00:12:39.218 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:12:39.218 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:39.218 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:39.218 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 837785 00:12:39.218 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 837785 ']' 00:12:39.218 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 837785 00:12:39.218 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:39.218 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:39.218 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 837785 00:12:39.218 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:39.218 13:12:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:39.218 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 837785' 00:12:39.218 killing process with pid 837785 00:12:39.218 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 837785 00:12:39.218 [2024-07-25 13:12:49.610648] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:39.218 [2024-07-25 13:12:49.610704] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:39.218 [2024-07-25 13:12:49.610745] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.218 [2024-07-25 13:12:49.610755] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1543160 name raid_bdev1, state offline 00:12:39.218 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 837785 00:12:39.218 [2024-07-25 13:12:49.626500] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:39.479 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:12:39.479 00:12:39.479 real 0m9.963s 00:12:39.479 user 0m17.717s 00:12:39.479 sys 0m1.909s 00:12:39.479 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:39.479 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.479 ************************************ 00:12:39.479 END TEST raid_superblock_test 00:12:39.479 ************************************ 00:12:39.479 13:12:49 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:12:39.479 13:12:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:39.479 13:12:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.479 13:12:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:39.479 
************************************ 00:12:39.479 START TEST raid_read_error_test 00:12:39.479 ************************************ 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # 
local bdevperf_log 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.YqkPodoDoT 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=839636 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 839636 /var/tmp/spdk-raid.sock 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:39.479 13:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 839636 ']' 00:12:39.480 13:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:39.480 13:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:39.480 13:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:39.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
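The `waitforlisten 839636 /var/tmp/spdk-raid.sock` step above blocks until the freshly launched bdevperf process is accepting RPC connections on its UNIX domain socket. A minimal Python sketch of that polling pattern (illustrative only — `wait_for_listen` is not SPDK's actual helper; the real one also takes the PID so it can bail out early if the process dies before listening):

```python
import socket
import time

def wait_for_listen(sock_path, timeout=5.0, interval=0.1):
    """Poll until something is accepting connections on the UNIX socket
    at sock_path, or give up after `timeout` seconds.

    Returns True once a connect() succeeds, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True   # server is up and listening
        except OSError:
            time.sleep(interval)  # not ready yet; retry
        finally:
            s.close()
    return False
```

A successful connect is the readiness signal; everything after this point in the log (the `bdev_malloc_create` / `bdev_error_create` / `bdev_passthru_create` RPCs) assumes the socket is live.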
00:12:39.480 13:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:39.480 13:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.739 [2024-07-25 13:12:49.974380] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:12:39.739 [2024-07-25 13:12:49.974438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid839636 ] 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3f:01.7 
cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:39.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.739 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:39.739 [2024-07-25 13:12:50.109325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.739 [2024-07-25 13:12:50.196595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.999 [2024-07-25 13:12:50.258965] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.999 [2024-07-25 13:12:50.258998] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.606 13:12:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:40.606 13:12:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:40.606 13:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:40.606 13:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:40.606 BaseBdev1_malloc 00:12:40.865 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:40.865 true 00:12:40.865 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:41.124 [2024-07-25 13:12:51.535993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:41.124 [2024-07-25 13:12:51.536033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.124 [2024-07-25 13:12:51.536052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16211d0 00:12:41.124 [2024-07-25 13:12:51.536064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.124 [2024-07-25 13:12:51.537647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.124 [2024-07-25 13:12:51.537676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:41.124 BaseBdev1 00:12:41.124 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:41.124 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:41.386 BaseBdev2_malloc 00:12:41.386 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:41.644 true 00:12:41.644 13:12:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:41.903 [2024-07-25 13:12:52.209980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:41.903 [2024-07-25 13:12:52.210020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.903 [2024-07-25 13:12:52.210038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1624710 00:12:41.903 [2024-07-25 13:12:52.210050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.903 [2024-07-25 13:12:52.211445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.903 [2024-07-25 13:12:52.211472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:41.903 BaseBdev2 00:12:41.903 13:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:12:42.163 [2024-07-25 13:12:52.438610] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.163 [2024-07-25 13:12:52.439770] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.163 [2024-07-25 13:12:52.439928] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1627bc0 00:12:42.163 [2024-07-25 13:12:52.439940] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:42.163 [2024-07-25 13:12:52.440117] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x162a870 00:12:42.163 [2024-07-25 13:12:52.440254] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1627bc0 00:12:42.163 [2024-07-25 13:12:52.440264] 
bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1627bc0 00:12:42.163 [2024-07-25 13:12:52.440370] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.163 13:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:42.163 13:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:42.163 13:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:42.163 13:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:42.163 13:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:42.163 13:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:42.163 13:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:42.163 13:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:42.163 13:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:42.163 13:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:42.163 13:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.163 13:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.422 13:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:42.422 "name": "raid_bdev1", 00:12:42.422 "uuid": "2a18aeed-aa29-486d-8a66-819d6f863c2f", 00:12:42.422 "strip_size_kb": 64, 00:12:42.422 "state": "online", 00:12:42.422 "raid_level": "concat", 00:12:42.422 "superblock": true, 
00:12:42.422 "num_base_bdevs": 2, 00:12:42.422 "num_base_bdevs_discovered": 2, 00:12:42.422 "num_base_bdevs_operational": 2, 00:12:42.422 "base_bdevs_list": [ 00:12:42.422 { 00:12:42.422 "name": "BaseBdev1", 00:12:42.422 "uuid": "8cbeeeb3-d709-58ac-9cd0-9e55e7dc535c", 00:12:42.422 "is_configured": true, 00:12:42.422 "data_offset": 2048, 00:12:42.422 "data_size": 63488 00:12:42.422 }, 00:12:42.422 { 00:12:42.422 "name": "BaseBdev2", 00:12:42.422 "uuid": "9cfb111e-e7dd-50ff-97de-5b59a06f2429", 00:12:42.422 "is_configured": true, 00:12:42.422 "data_offset": 2048, 00:12:42.422 "data_size": 63488 00:12:42.422 } 00:12:42.422 ] 00:12:42.422 }' 00:12:42.422 13:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:42.422 13:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.003 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:12:43.003 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:43.003 [2024-07-25 13:12:53.365310] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1628a10 00:12:43.944 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:44.203 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:12:44.203 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:12:44.203 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:12:44.203 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:44.203 13:12:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:44.203 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:44.203 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:44.203 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:44.203 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:44.203 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:44.203 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:44.203 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:44.203 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:44.203 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.203 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.462 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:44.462 "name": "raid_bdev1", 00:12:44.462 "uuid": "2a18aeed-aa29-486d-8a66-819d6f863c2f", 00:12:44.462 "strip_size_kb": 64, 00:12:44.462 "state": "online", 00:12:44.462 "raid_level": "concat", 00:12:44.462 "superblock": true, 00:12:44.462 "num_base_bdevs": 2, 00:12:44.462 "num_base_bdevs_discovered": 2, 00:12:44.462 "num_base_bdevs_operational": 2, 00:12:44.462 "base_bdevs_list": [ 00:12:44.462 { 00:12:44.462 "name": "BaseBdev1", 00:12:44.462 "uuid": "8cbeeeb3-d709-58ac-9cd0-9e55e7dc535c", 00:12:44.462 "is_configured": true, 00:12:44.462 "data_offset": 2048, 00:12:44.462 "data_size": 63488 00:12:44.462 }, 
00:12:44.462 { 00:12:44.462 "name": "BaseBdev2", 00:12:44.462 "uuid": "9cfb111e-e7dd-50ff-97de-5b59a06f2429", 00:12:44.462 "is_configured": true, 00:12:44.462 "data_offset": 2048, 00:12:44.462 "data_size": 63488 00:12:44.462 } 00:12:44.462 ] 00:12:44.462 }' 00:12:44.462 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:44.462 13:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.029 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:45.029 [2024-07-25 13:12:55.466573] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:45.029 [2024-07-25 13:12:55.466611] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.029 [2024-07-25 13:12:55.469587] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.029 [2024-07-25 13:12:55.469616] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.029 [2024-07-25 13:12:55.469639] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.029 [2024-07-25 13:12:55.469649] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1627bc0 name raid_bdev1, state offline 00:12:45.029 0 00:12:45.029 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 839636 00:12:45.029 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 839636 ']' 00:12:45.029 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 839636 00:12:45.029 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:12:45.029 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:45.029 13:12:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 839636 00:12:45.289 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:45.289 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:45.289 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 839636' 00:12:45.289 killing process with pid 839636 00:12:45.289 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 839636 00:12:45.289 [2024-07-25 13:12:55.545521] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:45.289 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 839636 00:12:45.289 [2024-07-25 13:12:55.554839] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.289 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.YqkPodoDoT 00:12:45.289 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:12:45.289 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:12:45.289 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.48 00:12:45.289 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:12:45.289 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:45.289 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:45.289 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.48 != \0\.\0\0 ]] 00:12:45.289 00:12:45.289 real 0m5.856s 00:12:45.289 user 0m9.088s 00:12:45.289 sys 0m1.031s 00:12:45.289 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:45.289 13:12:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.289 ************************************ 00:12:45.289 END TEST raid_read_error_test 00:12:45.289 ************************************ 00:12:45.549 13:12:55 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:12:45.549 13:12:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:45.549 13:12:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:45.549 13:12:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.549 ************************************ 00:12:45.549 START TEST raid_write_error_test 00:12:45.549 ************************************ 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:45.549 13:12:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.YJkAtMdfOd 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=840762 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 840762 /var/tmp/spdk-raid.sock 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 840762 ']' 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:45.549 
13:12:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:45.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:45.549 13:12:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.549 [2024-07-25 13:12:55.919503] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:12:45.549 [2024-07-25 13:12:55.919563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid840762 ] 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3d:01.6 cannot 
be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:45.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.549 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:45.550 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.550 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:45.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.550 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:45.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.550 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:45.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.550 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:45.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.550 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:45.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.550 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:45.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.550 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:45.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.550 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:45.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.550 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:45.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.550 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:45.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.550 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:45.808 [2024-07-25 13:12:56.052152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.808 [2024-07-25 13:12:56.136769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.808 [2024-07-25 13:12:56.199574] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.808 [2024-07-25 13:12:56.199609] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.376 13:12:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:46.376 13:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:46.376 13:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:46.376 13:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:46.634 BaseBdev1_malloc 00:12:46.634 13:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:46.893 true 00:12:46.893 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:47.152 [2024-07-25 13:12:57.400393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:47.152 [2024-07-25 13:12:57.400438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.152 [2024-07-25 13:12:57.400454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13f01d0 00:12:47.152 [2024-07-25 13:12:57.400466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.152 [2024-07-25 13:12:57.401912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.152 [2024-07-25 13:12:57.401938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:47.152 BaseBdev1 00:12:47.152 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:47.152 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:47.152 BaseBdev2_malloc 00:12:47.409 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:47.409 true 00:12:47.409 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:47.667 [2024-07-25 13:12:58.014181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:47.667 [2024-07-25 13:12:58.014227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.667 [2024-07-25 13:12:58.014245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13f3710 00:12:47.667 [2024-07-25 13:12:58.014256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.667 [2024-07-25 13:12:58.015543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.667 [2024-07-25 13:12:58.015569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:47.667 BaseBdev2 00:12:47.667 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:12:47.925 [2024-07-25 13:12:58.242800] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.925 [2024-07-25 13:12:58.243920] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:47.925 [2024-07-25 13:12:58.244076] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0x13f6bc0 00:12:47.925 [2024-07-25 13:12:58.244088] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:47.925 [2024-07-25 13:12:58.244268] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13f9870 00:12:47.925 [2024-07-25 13:12:58.244398] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13f6bc0 00:12:47.925 [2024-07-25 13:12:58.244407] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x13f6bc0 00:12:47.925 [2024-07-25 13:12:58.244507] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.925 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:47.925 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:47.925 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:47.925 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:47.925 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:47.925 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:47.925 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:47.925 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:47.925 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:47.925 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:47.925 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.925 13:12:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.183 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:48.183 "name": "raid_bdev1", 00:12:48.183 "uuid": "7becf1df-590c-443a-826a-d0dbd95b09fa", 00:12:48.183 "strip_size_kb": 64, 00:12:48.183 "state": "online", 00:12:48.183 "raid_level": "concat", 00:12:48.183 "superblock": true, 00:12:48.183 "num_base_bdevs": 2, 00:12:48.183 "num_base_bdevs_discovered": 2, 00:12:48.183 "num_base_bdevs_operational": 2, 00:12:48.183 "base_bdevs_list": [ 00:12:48.184 { 00:12:48.184 "name": "BaseBdev1", 00:12:48.184 "uuid": "08a3dc52-39b7-5eef-93d0-93bbcca2d1f8", 00:12:48.184 "is_configured": true, 00:12:48.184 "data_offset": 2048, 00:12:48.184 "data_size": 63488 00:12:48.184 }, 00:12:48.184 { 00:12:48.184 "name": "BaseBdev2", 00:12:48.184 "uuid": "34353e30-3825-592c-94c4-eb5f39de6da9", 00:12:48.184 "is_configured": true, 00:12:48.184 "data_offset": 2048, 00:12:48.184 "data_size": 63488 00:12:48.184 } 00:12:48.184 ] 00:12:48.184 }' 00:12:48.184 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:48.184 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.750 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:12:48.750 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:48.750 [2024-07-25 13:12:59.153441] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13f7a10 00:12:49.685 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:49.944 13:13:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:12:49.944 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:12:49.944 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:12:49.944 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:49.944 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:49.944 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:49.944 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:49.944 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:49.944 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:49.944 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:49.944 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:49.944 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:49.944 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:49.944 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.944 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.203 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:50.203 "name": "raid_bdev1", 00:12:50.203 "uuid": "7becf1df-590c-443a-826a-d0dbd95b09fa", 00:12:50.203 "strip_size_kb": 64, 00:12:50.203 "state": "online", 00:12:50.203 
"raid_level": "concat", 00:12:50.203 "superblock": true, 00:12:50.203 "num_base_bdevs": 2, 00:12:50.203 "num_base_bdevs_discovered": 2, 00:12:50.203 "num_base_bdevs_operational": 2, 00:12:50.203 "base_bdevs_list": [ 00:12:50.203 { 00:12:50.203 "name": "BaseBdev1", 00:12:50.203 "uuid": "08a3dc52-39b7-5eef-93d0-93bbcca2d1f8", 00:12:50.203 "is_configured": true, 00:12:50.203 "data_offset": 2048, 00:12:50.203 "data_size": 63488 00:12:50.203 }, 00:12:50.203 { 00:12:50.203 "name": "BaseBdev2", 00:12:50.203 "uuid": "34353e30-3825-592c-94c4-eb5f39de6da9", 00:12:50.203 "is_configured": true, 00:12:50.203 "data_offset": 2048, 00:12:50.203 "data_size": 63488 00:12:50.203 } 00:12:50.203 ] 00:12:50.203 }' 00:12:50.203 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:50.203 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.768 13:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:51.026 [2024-07-25 13:13:01.307479] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:51.026 [2024-07-25 13:13:01.307514] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.026 [2024-07-25 13:13:01.310448] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.026 [2024-07-25 13:13:01.310478] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.026 [2024-07-25 13:13:01.310501] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.026 [2024-07-25 13:13:01.310511] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13f6bc0 name raid_bdev1, state offline 00:12:51.026 0 00:12:51.026 13:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 840762 
00:12:51.026 13:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 840762 ']' 00:12:51.026 13:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 840762 00:12:51.026 13:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:51.026 13:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:51.026 13:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 840762 00:12:51.026 13:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:51.026 13:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:51.026 13:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 840762' 00:12:51.026 killing process with pid 840762 00:12:51.026 13:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 840762 00:12:51.026 [2024-07-25 13:13:01.380455] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.026 13:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 840762 00:12:51.026 [2024-07-25 13:13:01.390404] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:51.286 13:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.YJkAtMdfOd 00:12:51.286 13:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:12:51.286 13:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:12:51.286 13:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:12:51.286 13:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:12:51.286 13:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 
in 00:12:51.286 13:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:51.286 13:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:12:51.286 00:12:51.286 real 0m5.758s 00:12:51.286 user 0m8.958s 00:12:51.286 sys 0m0.962s 00:12:51.286 13:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:51.286 13:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.286 ************************************ 00:12:51.286 END TEST raid_write_error_test 00:12:51.286 ************************************ 00:12:51.286 13:13:01 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:12:51.286 13:13:01 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:12:51.286 13:13:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:51.286 13:13:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:51.286 13:13:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:51.286 ************************************ 00:12:51.286 START TEST raid_state_function_test 00:12:51.286 ************************************ 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=841913 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid 
pid: 841913' 00:12:51.286 Process raid pid: 841913 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 841913 /var/tmp/spdk-raid.sock 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 841913 ']' 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:51.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.286 13:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.286 [2024-07-25 13:13:01.751398] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:12:51.286 [2024-07-25 13:13:01.751458] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:51.545 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:51.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.545 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:51.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.546 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:51.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.546 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:51.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.546 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:51.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.546 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:51.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.546 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:51.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.546 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:51.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.546 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:51.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.546 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:51.546 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.546 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:51.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.546 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:51.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.546 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:51.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.546 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:51.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.546 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:51.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.546 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:51.546 [2024-07-25 13:13:01.885270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.546 [2024-07-25 13:13:01.965920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.546 [2024-07-25 13:13:02.027530] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.546 [2024-07-25 13:13:02.027566] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.481 13:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.481 13:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:52.481 13:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:52.481 [2024-07-25 13:13:02.858582] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:52.481 [2024-07-25 13:13:02.858626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:12:52.481 [2024-07-25 13:13:02.858636] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:52.481 [2024-07-25 13:13:02.858647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:52.481 13:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:52.481 13:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:52.481 13:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:52.481 13:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:52.481 13:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:52.481 13:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:52.481 13:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:52.481 13:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:52.481 13:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:52.481 13:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:52.481 13:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.481 13:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.740 13:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:52.740 "name": "Existed_Raid", 00:12:52.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.740 "strip_size_kb": 0, 
00:12:52.740 "state": "configuring", 00:12:52.740 "raid_level": "raid1", 00:12:52.740 "superblock": false, 00:12:52.740 "num_base_bdevs": 2, 00:12:52.740 "num_base_bdevs_discovered": 0, 00:12:52.740 "num_base_bdevs_operational": 2, 00:12:52.740 "base_bdevs_list": [ 00:12:52.740 { 00:12:52.740 "name": "BaseBdev1", 00:12:52.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.740 "is_configured": false, 00:12:52.740 "data_offset": 0, 00:12:52.740 "data_size": 0 00:12:52.740 }, 00:12:52.740 { 00:12:52.740 "name": "BaseBdev2", 00:12:52.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.740 "is_configured": false, 00:12:52.740 "data_offset": 0, 00:12:52.740 "data_size": 0 00:12:52.740 } 00:12:52.740 ] 00:12:52.740 }' 00:12:52.740 13:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:52.740 13:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.315 13:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:53.603 [2024-07-25 13:13:03.821123] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:53.603 [2024-07-25 13:13:03.821163] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19f0f20 name Existed_Raid, state configuring 00:12:53.603 13:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:53.603 [2024-07-25 13:13:04.017643] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:53.603 [2024-07-25 13:13:04.017680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:53.603 [2024-07-25 13:13:04.017689] bdev.c:8190:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:53.603 [2024-07-25 13:13:04.017700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:53.604 13:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:53.863 [2024-07-25 13:13:04.203469] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.863 BaseBdev1 00:12:53.863 13:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:53.863 13:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:53.863 13:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:53.863 13:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:53.863 13:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:53.863 13:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:53.863 13:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:54.122 13:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:54.122 [ 00:12:54.122 { 00:12:54.122 "name": "BaseBdev1", 00:12:54.122 "aliases": [ 00:12:54.122 "f831e37a-5c2b-4e0d-84c9-59fac0072d99" 00:12:54.122 ], 00:12:54.122 "product_name": "Malloc disk", 00:12:54.122 "block_size": 512, 00:12:54.122 "num_blocks": 65536, 00:12:54.122 "uuid": "f831e37a-5c2b-4e0d-84c9-59fac0072d99", 00:12:54.122 
"assigned_rate_limits": { 00:12:54.122 "rw_ios_per_sec": 0, 00:12:54.122 "rw_mbytes_per_sec": 0, 00:12:54.122 "r_mbytes_per_sec": 0, 00:12:54.122 "w_mbytes_per_sec": 0 00:12:54.122 }, 00:12:54.122 "claimed": true, 00:12:54.122 "claim_type": "exclusive_write", 00:12:54.122 "zoned": false, 00:12:54.122 "supported_io_types": { 00:12:54.122 "read": true, 00:12:54.122 "write": true, 00:12:54.122 "unmap": true, 00:12:54.122 "flush": true, 00:12:54.122 "reset": true, 00:12:54.122 "nvme_admin": false, 00:12:54.122 "nvme_io": false, 00:12:54.122 "nvme_io_md": false, 00:12:54.122 "write_zeroes": true, 00:12:54.122 "zcopy": true, 00:12:54.122 "get_zone_info": false, 00:12:54.122 "zone_management": false, 00:12:54.122 "zone_append": false, 00:12:54.122 "compare": false, 00:12:54.122 "compare_and_write": false, 00:12:54.122 "abort": true, 00:12:54.122 "seek_hole": false, 00:12:54.122 "seek_data": false, 00:12:54.122 "copy": true, 00:12:54.122 "nvme_iov_md": false 00:12:54.122 }, 00:12:54.122 "memory_domains": [ 00:12:54.122 { 00:12:54.122 "dma_device_id": "system", 00:12:54.122 "dma_device_type": 1 00:12:54.122 }, 00:12:54.122 { 00:12:54.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.122 "dma_device_type": 2 00:12:54.122 } 00:12:54.122 ], 00:12:54.122 "driver_specific": {} 00:12:54.122 } 00:12:54.122 ] 00:12:54.122 13:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:54.122 13:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:54.122 13:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:54.122 13:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:54.122 13:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:54.122 13:13:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:54.122 13:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:54.122 13:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:54.122 13:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:54.123 13:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:54.123 13:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:54.123 13:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.123 13:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.382 13:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:54.382 "name": "Existed_Raid", 00:12:54.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.382 "strip_size_kb": 0, 00:12:54.382 "state": "configuring", 00:12:54.382 "raid_level": "raid1", 00:12:54.382 "superblock": false, 00:12:54.382 "num_base_bdevs": 2, 00:12:54.382 "num_base_bdevs_discovered": 1, 00:12:54.382 "num_base_bdevs_operational": 2, 00:12:54.382 "base_bdevs_list": [ 00:12:54.382 { 00:12:54.382 "name": "BaseBdev1", 00:12:54.382 "uuid": "f831e37a-5c2b-4e0d-84c9-59fac0072d99", 00:12:54.382 "is_configured": true, 00:12:54.382 "data_offset": 0, 00:12:54.382 "data_size": 65536 00:12:54.382 }, 00:12:54.382 { 00:12:54.382 "name": "BaseBdev2", 00:12:54.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.382 "is_configured": false, 00:12:54.382 "data_offset": 0, 00:12:54.382 "data_size": 0 00:12:54.382 } 00:12:54.382 ] 00:12:54.382 }' 00:12:54.382 13:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:12:54.382 13:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.950 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:55.209 [2024-07-25 13:13:05.478825] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:55.210 [2024-07-25 13:13:05.478864] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19f0810 name Existed_Raid, state configuring 00:12:55.210 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:55.210 [2024-07-25 13:13:05.643290] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:55.210 [2024-07-25 13:13:05.644664] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:55.210 [2024-07-25 13:13:05.644697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:55.210 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:55.210 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:55.210 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:55.210 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:55.210 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:55.210 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:55.210 13:13:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:55.210 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:55.210 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:55.210 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:55.210 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:55.210 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:55.210 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.210 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.468 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:55.468 "name": "Existed_Raid", 00:12:55.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.468 "strip_size_kb": 0, 00:12:55.468 "state": "configuring", 00:12:55.468 "raid_level": "raid1", 00:12:55.468 "superblock": false, 00:12:55.468 "num_base_bdevs": 2, 00:12:55.468 "num_base_bdevs_discovered": 1, 00:12:55.468 "num_base_bdevs_operational": 2, 00:12:55.468 "base_bdevs_list": [ 00:12:55.468 { 00:12:55.468 "name": "BaseBdev1", 00:12:55.468 "uuid": "f831e37a-5c2b-4e0d-84c9-59fac0072d99", 00:12:55.468 "is_configured": true, 00:12:55.468 "data_offset": 0, 00:12:55.469 "data_size": 65536 00:12:55.469 }, 00:12:55.469 { 00:12:55.469 "name": "BaseBdev2", 00:12:55.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.469 "is_configured": false, 00:12:55.469 "data_offset": 0, 00:12:55.469 "data_size": 0 00:12:55.469 } 00:12:55.469 ] 00:12:55.469 }' 00:12:55.469 13:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:12:55.469 13:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.036 13:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:56.294 [2024-07-25 13:13:06.576904] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.295 [2024-07-25 13:13:06.576941] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x19f1610 00:12:56.295 [2024-07-25 13:13:06.576949] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:56.295 [2024-07-25 13:13:06.577129] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19dd690 00:12:56.295 [2024-07-25 13:13:06.577254] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19f1610 00:12:56.295 [2024-07-25 13:13:06.577264] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x19f1610 00:12:56.295 [2024-07-25 13:13:06.577420] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.295 BaseBdev2 00:12:56.295 13:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:56.295 13:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:56.295 13:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:56.295 13:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:56.295 13:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:56.295 13:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:56.295 13:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:56.553 13:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:56.553 [ 00:12:56.553 { 00:12:56.553 "name": "BaseBdev2", 00:12:56.553 "aliases": [ 00:12:56.553 "1c73a730-2e18-44ca-afc9-d630658cf8e1" 00:12:56.553 ], 00:12:56.553 "product_name": "Malloc disk", 00:12:56.553 "block_size": 512, 00:12:56.553 "num_blocks": 65536, 00:12:56.553 "uuid": "1c73a730-2e18-44ca-afc9-d630658cf8e1", 00:12:56.553 "assigned_rate_limits": { 00:12:56.553 "rw_ios_per_sec": 0, 00:12:56.553 "rw_mbytes_per_sec": 0, 00:12:56.553 "r_mbytes_per_sec": 0, 00:12:56.553 "w_mbytes_per_sec": 0 00:12:56.554 }, 00:12:56.554 "claimed": true, 00:12:56.554 "claim_type": "exclusive_write", 00:12:56.554 "zoned": false, 00:12:56.554 "supported_io_types": { 00:12:56.554 "read": true, 00:12:56.554 "write": true, 00:12:56.554 "unmap": true, 00:12:56.554 "flush": true, 00:12:56.554 "reset": true, 00:12:56.554 "nvme_admin": false, 00:12:56.554 "nvme_io": false, 00:12:56.554 "nvme_io_md": false, 00:12:56.554 "write_zeroes": true, 00:12:56.554 "zcopy": true, 00:12:56.554 "get_zone_info": false, 00:12:56.554 "zone_management": false, 00:12:56.554 "zone_append": false, 00:12:56.554 "compare": false, 00:12:56.554 "compare_and_write": false, 00:12:56.554 "abort": true, 00:12:56.554 "seek_hole": false, 00:12:56.554 "seek_data": false, 00:12:56.554 "copy": true, 00:12:56.554 "nvme_iov_md": false 00:12:56.554 }, 00:12:56.554 "memory_domains": [ 00:12:56.554 { 00:12:56.554 "dma_device_id": "system", 00:12:56.554 "dma_device_type": 1 00:12:56.554 }, 00:12:56.554 { 00:12:56.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.554 "dma_device_type": 2 00:12:56.554 } 00:12:56.554 ], 00:12:56.554 "driver_specific": {} 00:12:56.554 } 00:12:56.554 ] 
00:12:56.813 13:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:56.813 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:56.813 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:56.813 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:56.813 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:56.813 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:56.813 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:56.813 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:56.813 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:56.813 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:56.813 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:56.813 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:56.813 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:56.814 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.814 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.814 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:56.814 "name": "Existed_Raid", 00:12:56.814 "uuid": "dc26bc6c-b7dc-4170-b1dc-30f4a05ac60a", 
00:12:56.814 "strip_size_kb": 0, 00:12:56.814 "state": "online", 00:12:56.814 "raid_level": "raid1", 00:12:56.814 "superblock": false, 00:12:56.814 "num_base_bdevs": 2, 00:12:56.814 "num_base_bdevs_discovered": 2, 00:12:56.814 "num_base_bdevs_operational": 2, 00:12:56.814 "base_bdevs_list": [ 00:12:56.814 { 00:12:56.814 "name": "BaseBdev1", 00:12:56.814 "uuid": "f831e37a-5c2b-4e0d-84c9-59fac0072d99", 00:12:56.814 "is_configured": true, 00:12:56.814 "data_offset": 0, 00:12:56.814 "data_size": 65536 00:12:56.814 }, 00:12:56.814 { 00:12:56.814 "name": "BaseBdev2", 00:12:56.814 "uuid": "1c73a730-2e18-44ca-afc9-d630658cf8e1", 00:12:56.814 "is_configured": true, 00:12:56.814 "data_offset": 0, 00:12:56.814 "data_size": 65536 00:12:56.814 } 00:12:56.814 ] 00:12:56.814 }' 00:12:56.814 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:56.814 13:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.382 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:57.382 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:57.382 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:57.382 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:57.382 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:57.382 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:57.382 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:57.382 13:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:57.641 [2024-07-25 13:13:08.033389] 
bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.641 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:57.641 "name": "Existed_Raid", 00:12:57.641 "aliases": [ 00:12:57.641 "dc26bc6c-b7dc-4170-b1dc-30f4a05ac60a" 00:12:57.641 ], 00:12:57.641 "product_name": "Raid Volume", 00:12:57.641 "block_size": 512, 00:12:57.641 "num_blocks": 65536, 00:12:57.641 "uuid": "dc26bc6c-b7dc-4170-b1dc-30f4a05ac60a", 00:12:57.641 "assigned_rate_limits": { 00:12:57.641 "rw_ios_per_sec": 0, 00:12:57.641 "rw_mbytes_per_sec": 0, 00:12:57.641 "r_mbytes_per_sec": 0, 00:12:57.641 "w_mbytes_per_sec": 0 00:12:57.641 }, 00:12:57.641 "claimed": false, 00:12:57.641 "zoned": false, 00:12:57.641 "supported_io_types": { 00:12:57.641 "read": true, 00:12:57.641 "write": true, 00:12:57.641 "unmap": false, 00:12:57.641 "flush": false, 00:12:57.641 "reset": true, 00:12:57.641 "nvme_admin": false, 00:12:57.641 "nvme_io": false, 00:12:57.641 "nvme_io_md": false, 00:12:57.641 "write_zeroes": true, 00:12:57.641 "zcopy": false, 00:12:57.641 "get_zone_info": false, 00:12:57.641 "zone_management": false, 00:12:57.641 "zone_append": false, 00:12:57.641 "compare": false, 00:12:57.641 "compare_and_write": false, 00:12:57.641 "abort": false, 00:12:57.641 "seek_hole": false, 00:12:57.641 "seek_data": false, 00:12:57.641 "copy": false, 00:12:57.641 "nvme_iov_md": false 00:12:57.641 }, 00:12:57.641 "memory_domains": [ 00:12:57.641 { 00:12:57.641 "dma_device_id": "system", 00:12:57.641 "dma_device_type": 1 00:12:57.641 }, 00:12:57.641 { 00:12:57.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.641 "dma_device_type": 2 00:12:57.641 }, 00:12:57.641 { 00:12:57.641 "dma_device_id": "system", 00:12:57.641 "dma_device_type": 1 00:12:57.641 }, 00:12:57.641 { 00:12:57.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.641 "dma_device_type": 2 00:12:57.641 } 00:12:57.641 ], 00:12:57.641 "driver_specific": { 00:12:57.641 "raid": { 
00:12:57.641 "uuid": "dc26bc6c-b7dc-4170-b1dc-30f4a05ac60a", 00:12:57.641 "strip_size_kb": 0, 00:12:57.641 "state": "online", 00:12:57.641 "raid_level": "raid1", 00:12:57.641 "superblock": false, 00:12:57.641 "num_base_bdevs": 2, 00:12:57.641 "num_base_bdevs_discovered": 2, 00:12:57.641 "num_base_bdevs_operational": 2, 00:12:57.641 "base_bdevs_list": [ 00:12:57.641 { 00:12:57.641 "name": "BaseBdev1", 00:12:57.641 "uuid": "f831e37a-5c2b-4e0d-84c9-59fac0072d99", 00:12:57.641 "is_configured": true, 00:12:57.641 "data_offset": 0, 00:12:57.641 "data_size": 65536 00:12:57.641 }, 00:12:57.641 { 00:12:57.641 "name": "BaseBdev2", 00:12:57.641 "uuid": "1c73a730-2e18-44ca-afc9-d630658cf8e1", 00:12:57.641 "is_configured": true, 00:12:57.641 "data_offset": 0, 00:12:57.641 "data_size": 65536 00:12:57.641 } 00:12:57.641 ] 00:12:57.641 } 00:12:57.641 } 00:12:57.641 }' 00:12:57.641 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.641 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:57.641 BaseBdev2' 00:12:57.641 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:57.641 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:57.641 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:57.900 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:57.900 "name": "BaseBdev1", 00:12:57.900 "aliases": [ 00:12:57.900 "f831e37a-5c2b-4e0d-84c9-59fac0072d99" 00:12:57.900 ], 00:12:57.900 "product_name": "Malloc disk", 00:12:57.900 "block_size": 512, 00:12:57.900 "num_blocks": 65536, 00:12:57.900 "uuid": "f831e37a-5c2b-4e0d-84c9-59fac0072d99", 
00:12:57.900 "assigned_rate_limits": { 00:12:57.900 "rw_ios_per_sec": 0, 00:12:57.900 "rw_mbytes_per_sec": 0, 00:12:57.900 "r_mbytes_per_sec": 0, 00:12:57.900 "w_mbytes_per_sec": 0 00:12:57.900 }, 00:12:57.900 "claimed": true, 00:12:57.900 "claim_type": "exclusive_write", 00:12:57.900 "zoned": false, 00:12:57.900 "supported_io_types": { 00:12:57.900 "read": true, 00:12:57.900 "write": true, 00:12:57.900 "unmap": true, 00:12:57.900 "flush": true, 00:12:57.900 "reset": true, 00:12:57.900 "nvme_admin": false, 00:12:57.900 "nvme_io": false, 00:12:57.900 "nvme_io_md": false, 00:12:57.900 "write_zeroes": true, 00:12:57.900 "zcopy": true, 00:12:57.900 "get_zone_info": false, 00:12:57.900 "zone_management": false, 00:12:57.900 "zone_append": false, 00:12:57.900 "compare": false, 00:12:57.900 "compare_and_write": false, 00:12:57.900 "abort": true, 00:12:57.900 "seek_hole": false, 00:12:57.900 "seek_data": false, 00:12:57.900 "copy": true, 00:12:57.900 "nvme_iov_md": false 00:12:57.900 }, 00:12:57.900 "memory_domains": [ 00:12:57.900 { 00:12:57.900 "dma_device_id": "system", 00:12:57.900 "dma_device_type": 1 00:12:57.900 }, 00:12:57.900 { 00:12:57.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.900 "dma_device_type": 2 00:12:57.900 } 00:12:57.900 ], 00:12:57.900 "driver_specific": {} 00:12:57.900 }' 00:12:57.900 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:57.900 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.158 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.158 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.158 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.158 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.158 13:13:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.158 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.158 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.158 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.158 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.417 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.417 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.417 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.417 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:58.417 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.417 "name": "BaseBdev2", 00:12:58.417 "aliases": [ 00:12:58.417 "1c73a730-2e18-44ca-afc9-d630658cf8e1" 00:12:58.417 ], 00:12:58.417 "product_name": "Malloc disk", 00:12:58.417 "block_size": 512, 00:12:58.417 "num_blocks": 65536, 00:12:58.417 "uuid": "1c73a730-2e18-44ca-afc9-d630658cf8e1", 00:12:58.417 "assigned_rate_limits": { 00:12:58.417 "rw_ios_per_sec": 0, 00:12:58.417 "rw_mbytes_per_sec": 0, 00:12:58.417 "r_mbytes_per_sec": 0, 00:12:58.417 "w_mbytes_per_sec": 0 00:12:58.417 }, 00:12:58.417 "claimed": true, 00:12:58.417 "claim_type": "exclusive_write", 00:12:58.417 "zoned": false, 00:12:58.417 "supported_io_types": { 00:12:58.417 "read": true, 00:12:58.417 "write": true, 00:12:58.417 "unmap": true, 00:12:58.417 "flush": true, 00:12:58.417 "reset": true, 00:12:58.417 "nvme_admin": false, 00:12:58.417 "nvme_io": false, 00:12:58.417 "nvme_io_md": false, 00:12:58.417 "write_zeroes": true, 
00:12:58.417 "zcopy": true, 00:12:58.417 "get_zone_info": false, 00:12:58.417 "zone_management": false, 00:12:58.417 "zone_append": false, 00:12:58.417 "compare": false, 00:12:58.417 "compare_and_write": false, 00:12:58.417 "abort": true, 00:12:58.417 "seek_hole": false, 00:12:58.417 "seek_data": false, 00:12:58.417 "copy": true, 00:12:58.417 "nvme_iov_md": false 00:12:58.417 }, 00:12:58.417 "memory_domains": [ 00:12:58.417 { 00:12:58.417 "dma_device_id": "system", 00:12:58.417 "dma_device_type": 1 00:12:58.417 }, 00:12:58.417 { 00:12:58.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.417 "dma_device_type": 2 00:12:58.417 } 00:12:58.417 ], 00:12:58.417 "driver_specific": {} 00:12:58.417 }' 00:12:58.417 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.678 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.678 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.678 13:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.678 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.678 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.678 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.678 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.678 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.678 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.937 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.937 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.937 13:13:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:59.197 [2024-07-25 13:13:09.448927] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.197 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.456 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:59.456 "name": "Existed_Raid", 00:12:59.456 "uuid": "dc26bc6c-b7dc-4170-b1dc-30f4a05ac60a", 00:12:59.456 "strip_size_kb": 0, 00:12:59.456 "state": "online", 00:12:59.456 "raid_level": "raid1", 00:12:59.456 "superblock": false, 00:12:59.456 "num_base_bdevs": 2, 00:12:59.456 "num_base_bdevs_discovered": 1, 00:12:59.456 "num_base_bdevs_operational": 1, 00:12:59.456 "base_bdevs_list": [ 00:12:59.456 { 00:12:59.456 "name": null, 00:12:59.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.456 "is_configured": false, 00:12:59.456 "data_offset": 0, 00:12:59.456 "data_size": 65536 00:12:59.456 }, 00:12:59.456 { 00:12:59.456 "name": "BaseBdev2", 00:12:59.456 "uuid": "1c73a730-2e18-44ca-afc9-d630658cf8e1", 00:12:59.456 "is_configured": true, 00:12:59.456 "data_offset": 0, 00:12:59.456 "data_size": 65536 00:12:59.456 } 00:12:59.456 ] 00:12:59.456 }' 00:12:59.456 13:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:59.456 13:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.026 13:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:00.026 13:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:00.026 13:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.026 13:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:00.026 13:13:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:00.026 13:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:00.026 13:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:00.285 [2024-07-25 13:13:10.713248] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:00.285 [2024-07-25 13:13:10.713322] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:00.285 [2024-07-25 13:13:10.723483] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.285 [2024-07-25 13:13:10.723512] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.285 [2024-07-25 13:13:10.723523] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19f1610 name Existed_Raid, state offline 00:13:00.285 13:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:00.285 13:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:00.285 13:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.285 13:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:00.544 13:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:00.544 13:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:00.544 13:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:13:00.544 13:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- 
# killprocess 841913 00:13:00.544 13:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 841913 ']' 00:13:00.544 13:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 841913 00:13:00.544 13:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:00.544 13:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:00.544 13:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 841913 00:13:00.544 13:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:00.544 13:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:00.544 13:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 841913' 00:13:00.544 killing process with pid 841913 00:13:00.544 13:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 841913 00:13:00.544 [2024-07-25 13:13:10.984287] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:00.544 13:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 841913 00:13:00.544 [2024-07-25 13:13:10.985145] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:00.804 00:13:00.804 real 0m9.493s 00:13:00.804 user 0m16.775s 00:13:00.804 sys 0m1.809s 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.804 ************************************ 00:13:00.804 END TEST raid_state_function_test 00:13:00.804 ************************************ 00:13:00.804 
13:13:11 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:13:00.804 13:13:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:00.804 13:13:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:00.804 13:13:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:00.804 ************************************ 00:13:00.804 START TEST raid_state_function_test_sb 00:13:00.804 ************************************ 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:00.804 13:13:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=843735 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 843735' 00:13:00.804 Process raid pid: 843735 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 843735 /var/tmp/spdk-raid.sock 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 843735 ']' 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:00.804 
13:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:00.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:00.804 13:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.064 [2024-07-25 13:13:11.330234] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:13:01.064 [2024-07-25 13:13:11.330291] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3d:01.6 
cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.064 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:01.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.065 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:01.065 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.065 EAL: Requested device 0000:3f:01.4 cannot be used 
00:13:01.065 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.065 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:01.065 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.065 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:01.065 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.065 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:01.065 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.065 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:01.065 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.065 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:01.065 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.065 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:01.065 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.065 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:01.065 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.065 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:01.065 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.065 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:01.065 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.065 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:01.065 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.065 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:01.065 [2024-07-25 13:13:11.462700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.065 [2024-07-25 13:13:11.549835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.324 [2024-07-25 13:13:11.615232] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.324 [2024-07-25 13:13:11.615271] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 
00:13:01.892 13:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:01.892 13:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:01.892 13:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:02.151 [2024-07-25 13:13:12.434766] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:02.151 [2024-07-25 13:13:12.434804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:02.151 [2024-07-25 13:13:12.434814] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:02.151 [2024-07-25 13:13:12.434825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:02.151 13:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:02.151 13:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:02.151 13:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:02.151 13:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:02.151 13:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:02.151 13:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:02.151 13:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:02.151 13:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:02.151 13:13:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:02.151 13:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:02.151 13:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.151 13:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.410 13:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:02.410 "name": "Existed_Raid", 00:13:02.410 "uuid": "a04691b5-a804-4e49-9f29-bee3f191196c", 00:13:02.410 "strip_size_kb": 0, 00:13:02.410 "state": "configuring", 00:13:02.410 "raid_level": "raid1", 00:13:02.410 "superblock": true, 00:13:02.410 "num_base_bdevs": 2, 00:13:02.410 "num_base_bdevs_discovered": 0, 00:13:02.410 "num_base_bdevs_operational": 2, 00:13:02.410 "base_bdevs_list": [ 00:13:02.410 { 00:13:02.410 "name": "BaseBdev1", 00:13:02.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.410 "is_configured": false, 00:13:02.410 "data_offset": 0, 00:13:02.410 "data_size": 0 00:13:02.410 }, 00:13:02.410 { 00:13:02.410 "name": "BaseBdev2", 00:13:02.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.410 "is_configured": false, 00:13:02.410 "data_offset": 0, 00:13:02.410 "data_size": 0 00:13:02.410 } 00:13:02.410 ] 00:13:02.410 }' 00:13:02.410 13:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:02.410 13:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.979 13:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:02.979 [2024-07-25 13:13:13.465364] 
bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:02.979 [2024-07-25 13:13:13.465395] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x229cf20 name Existed_Raid, state configuring 00:13:03.238 13:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:03.238 [2024-07-25 13:13:13.693971] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:03.238 [2024-07-25 13:13:13.694006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:03.238 [2024-07-25 13:13:13.694015] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:03.238 [2024-07-25 13:13:13.694025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:03.238 13:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:03.497 [2024-07-25 13:13:13.875956] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.497 BaseBdev1 00:13:03.497 13:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:03.497 13:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:03.497 13:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:03.497 13:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:03.497 13:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:03.497 13:13:13 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:03.497 13:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:03.756 13:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:04.016 [ 00:13:04.016 { 00:13:04.016 "name": "BaseBdev1", 00:13:04.016 "aliases": [ 00:13:04.016 "08cb839a-95cf-4627-8788-839de36a2bce" 00:13:04.016 ], 00:13:04.016 "product_name": "Malloc disk", 00:13:04.016 "block_size": 512, 00:13:04.016 "num_blocks": 65536, 00:13:04.016 "uuid": "08cb839a-95cf-4627-8788-839de36a2bce", 00:13:04.016 "assigned_rate_limits": { 00:13:04.016 "rw_ios_per_sec": 0, 00:13:04.016 "rw_mbytes_per_sec": 0, 00:13:04.016 "r_mbytes_per_sec": 0, 00:13:04.016 "w_mbytes_per_sec": 0 00:13:04.016 }, 00:13:04.016 "claimed": true, 00:13:04.016 "claim_type": "exclusive_write", 00:13:04.016 "zoned": false, 00:13:04.016 "supported_io_types": { 00:13:04.016 "read": true, 00:13:04.016 "write": true, 00:13:04.016 "unmap": true, 00:13:04.016 "flush": true, 00:13:04.016 "reset": true, 00:13:04.016 "nvme_admin": false, 00:13:04.016 "nvme_io": false, 00:13:04.016 "nvme_io_md": false, 00:13:04.016 "write_zeroes": true, 00:13:04.016 "zcopy": true, 00:13:04.016 "get_zone_info": false, 00:13:04.016 "zone_management": false, 00:13:04.016 "zone_append": false, 00:13:04.016 "compare": false, 00:13:04.016 "compare_and_write": false, 00:13:04.016 "abort": true, 00:13:04.016 "seek_hole": false, 00:13:04.016 "seek_data": false, 00:13:04.016 "copy": true, 00:13:04.016 "nvme_iov_md": false 00:13:04.016 }, 00:13:04.016 "memory_domains": [ 00:13:04.016 { 00:13:04.016 "dma_device_id": "system", 00:13:04.016 "dma_device_type": 1 00:13:04.016 }, 00:13:04.016 { 00:13:04.016 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:04.016 "dma_device_type": 2 00:13:04.016 } 00:13:04.016 ], 00:13:04.016 "driver_specific": {} 00:13:04.016 } 00:13:04.016 ] 00:13:04.016 13:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:04.016 13:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:04.016 13:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:04.016 13:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:04.016 13:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:04.016 13:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:04.016 13:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:04.016 13:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:04.016 13:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:04.016 13:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:04.016 13:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:04.016 13:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.016 13:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.276 13:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:04.276 "name": "Existed_Raid", 00:13:04.276 "uuid": "57cc5f82-f375-406d-816e-1247e3adb456", 
00:13:04.276 "strip_size_kb": 0, 00:13:04.276 "state": "configuring", 00:13:04.276 "raid_level": "raid1", 00:13:04.276 "superblock": true, 00:13:04.276 "num_base_bdevs": 2, 00:13:04.276 "num_base_bdevs_discovered": 1, 00:13:04.276 "num_base_bdevs_operational": 2, 00:13:04.276 "base_bdevs_list": [ 00:13:04.276 { 00:13:04.276 "name": "BaseBdev1", 00:13:04.276 "uuid": "08cb839a-95cf-4627-8788-839de36a2bce", 00:13:04.276 "is_configured": true, 00:13:04.276 "data_offset": 2048, 00:13:04.276 "data_size": 63488 00:13:04.276 }, 00:13:04.276 { 00:13:04.276 "name": "BaseBdev2", 00:13:04.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.276 "is_configured": false, 00:13:04.276 "data_offset": 0, 00:13:04.276 "data_size": 0 00:13:04.276 } 00:13:04.276 ] 00:13:04.276 }' 00:13:04.276 13:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:04.276 13:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.845 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:04.845 [2024-07-25 13:13:15.279645] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:04.845 [2024-07-25 13:13:15.279680] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x229c810 name Existed_Raid, state configuring 00:13:04.845 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:05.104 [2024-07-25 13:13:15.504283] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:05.104 [2024-07-25 13:13:15.505666] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:05.104 [2024-07-25 
13:13:15.505697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:05.104 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:05.104 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:05.104 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:05.104 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:05.104 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:05.104 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:05.104 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:05.104 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:05.104 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:05.104 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:05.104 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:05.104 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:05.104 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.104 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.362 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:05.362 "name": "Existed_Raid", 
00:13:05.362 "uuid": "05759540-02c0-4813-afc4-b7a1d373084b", 00:13:05.362 "strip_size_kb": 0, 00:13:05.362 "state": "configuring", 00:13:05.362 "raid_level": "raid1", 00:13:05.362 "superblock": true, 00:13:05.362 "num_base_bdevs": 2, 00:13:05.362 "num_base_bdevs_discovered": 1, 00:13:05.362 "num_base_bdevs_operational": 2, 00:13:05.362 "base_bdevs_list": [ 00:13:05.362 { 00:13:05.362 "name": "BaseBdev1", 00:13:05.362 "uuid": "08cb839a-95cf-4627-8788-839de36a2bce", 00:13:05.362 "is_configured": true, 00:13:05.362 "data_offset": 2048, 00:13:05.362 "data_size": 63488 00:13:05.362 }, 00:13:05.362 { 00:13:05.362 "name": "BaseBdev2", 00:13:05.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.362 "is_configured": false, 00:13:05.362 "data_offset": 0, 00:13:05.362 "data_size": 0 00:13:05.362 } 00:13:05.362 ] 00:13:05.362 }' 00:13:05.362 13:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:05.362 13:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.930 13:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:06.189 [2024-07-25 13:13:16.542265] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.189 [2024-07-25 13:13:16.542423] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x229d610 00:13:06.189 [2024-07-25 13:13:16.542438] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:06.189 [2024-07-25 13:13:16.542597] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2289690 00:13:06.189 [2024-07-25 13:13:16.542710] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x229d610 00:13:06.189 [2024-07-25 13:13:16.542720] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name Existed_Raid, raid_bdev 0x229d610 00:13:06.189 [2024-07-25 13:13:16.542803] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.189 BaseBdev2 00:13:06.189 13:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:06.189 13:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:06.189 13:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:06.189 13:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:06.189 13:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:06.189 13:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:06.189 13:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:06.449 13:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:06.708 [ 00:13:06.708 { 00:13:06.708 "name": "BaseBdev2", 00:13:06.708 "aliases": [ 00:13:06.708 "1ebea0de-10bd-424d-8294-7b1ced297f73" 00:13:06.708 ], 00:13:06.708 "product_name": "Malloc disk", 00:13:06.708 "block_size": 512, 00:13:06.708 "num_blocks": 65536, 00:13:06.708 "uuid": "1ebea0de-10bd-424d-8294-7b1ced297f73", 00:13:06.708 "assigned_rate_limits": { 00:13:06.708 "rw_ios_per_sec": 0, 00:13:06.708 "rw_mbytes_per_sec": 0, 00:13:06.708 "r_mbytes_per_sec": 0, 00:13:06.708 "w_mbytes_per_sec": 0 00:13:06.708 }, 00:13:06.708 "claimed": true, 00:13:06.708 "claim_type": "exclusive_write", 00:13:06.708 "zoned": false, 00:13:06.708 "supported_io_types": { 00:13:06.708 "read": true, 
00:13:06.708 "write": true, 00:13:06.708 "unmap": true, 00:13:06.708 "flush": true, 00:13:06.708 "reset": true, 00:13:06.708 "nvme_admin": false, 00:13:06.708 "nvme_io": false, 00:13:06.708 "nvme_io_md": false, 00:13:06.708 "write_zeroes": true, 00:13:06.708 "zcopy": true, 00:13:06.708 "get_zone_info": false, 00:13:06.708 "zone_management": false, 00:13:06.708 "zone_append": false, 00:13:06.708 "compare": false, 00:13:06.708 "compare_and_write": false, 00:13:06.708 "abort": true, 00:13:06.708 "seek_hole": false, 00:13:06.708 "seek_data": false, 00:13:06.708 "copy": true, 00:13:06.708 "nvme_iov_md": false 00:13:06.708 }, 00:13:06.708 "memory_domains": [ 00:13:06.708 { 00:13:06.708 "dma_device_id": "system", 00:13:06.708 "dma_device_type": 1 00:13:06.708 }, 00:13:06.708 { 00:13:06.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.708 "dma_device_type": 2 00:13:06.708 } 00:13:06.708 ], 00:13:06.708 "driver_specific": {} 00:13:06.708 } 00:13:06.708 ] 00:13:06.708 13:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:06.708 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:06.708 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:06.708 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:06.708 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:06.708 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:06.708 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:06.708 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:06.708 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:13:06.708 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:06.708 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:06.708 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:06.708 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:06.708 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.708 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.967 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:06.967 "name": "Existed_Raid", 00:13:06.967 "uuid": "05759540-02c0-4813-afc4-b7a1d373084b", 00:13:06.967 "strip_size_kb": 0, 00:13:06.967 "state": "online", 00:13:06.967 "raid_level": "raid1", 00:13:06.967 "superblock": true, 00:13:06.967 "num_base_bdevs": 2, 00:13:06.967 "num_base_bdevs_discovered": 2, 00:13:06.967 "num_base_bdevs_operational": 2, 00:13:06.967 "base_bdevs_list": [ 00:13:06.967 { 00:13:06.967 "name": "BaseBdev1", 00:13:06.967 "uuid": "08cb839a-95cf-4627-8788-839de36a2bce", 00:13:06.967 "is_configured": true, 00:13:06.967 "data_offset": 2048, 00:13:06.967 "data_size": 63488 00:13:06.967 }, 00:13:06.967 { 00:13:06.967 "name": "BaseBdev2", 00:13:06.967 "uuid": "1ebea0de-10bd-424d-8294-7b1ced297f73", 00:13:06.967 "is_configured": true, 00:13:06.967 "data_offset": 2048, 00:13:06.967 "data_size": 63488 00:13:06.967 } 00:13:06.967 ] 00:13:06.967 }' 00:13:06.967 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:06.967 13:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:07.536 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:07.536 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:07.536 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:07.536 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:07.536 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:07.536 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:07.536 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:07.536 13:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:07.536 [2024-07-25 13:13:18.010415] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.796 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:07.796 "name": "Existed_Raid", 00:13:07.796 "aliases": [ 00:13:07.796 "05759540-02c0-4813-afc4-b7a1d373084b" 00:13:07.796 ], 00:13:07.796 "product_name": "Raid Volume", 00:13:07.796 "block_size": 512, 00:13:07.796 "num_blocks": 63488, 00:13:07.796 "uuid": "05759540-02c0-4813-afc4-b7a1d373084b", 00:13:07.796 "assigned_rate_limits": { 00:13:07.796 "rw_ios_per_sec": 0, 00:13:07.796 "rw_mbytes_per_sec": 0, 00:13:07.796 "r_mbytes_per_sec": 0, 00:13:07.796 "w_mbytes_per_sec": 0 00:13:07.796 }, 00:13:07.796 "claimed": false, 00:13:07.796 "zoned": false, 00:13:07.796 "supported_io_types": { 00:13:07.796 "read": true, 00:13:07.796 "write": true, 00:13:07.796 "unmap": false, 00:13:07.796 "flush": false, 00:13:07.796 "reset": true, 00:13:07.796 "nvme_admin": 
false, 00:13:07.796 "nvme_io": false, 00:13:07.796 "nvme_io_md": false, 00:13:07.796 "write_zeroes": true, 00:13:07.796 "zcopy": false, 00:13:07.796 "get_zone_info": false, 00:13:07.796 "zone_management": false, 00:13:07.796 "zone_append": false, 00:13:07.796 "compare": false, 00:13:07.796 "compare_and_write": false, 00:13:07.796 "abort": false, 00:13:07.796 "seek_hole": false, 00:13:07.796 "seek_data": false, 00:13:07.796 "copy": false, 00:13:07.796 "nvme_iov_md": false 00:13:07.796 }, 00:13:07.796 "memory_domains": [ 00:13:07.796 { 00:13:07.796 "dma_device_id": "system", 00:13:07.796 "dma_device_type": 1 00:13:07.796 }, 00:13:07.796 { 00:13:07.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.796 "dma_device_type": 2 00:13:07.796 }, 00:13:07.796 { 00:13:07.796 "dma_device_id": "system", 00:13:07.797 "dma_device_type": 1 00:13:07.797 }, 00:13:07.797 { 00:13:07.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.797 "dma_device_type": 2 00:13:07.797 } 00:13:07.797 ], 00:13:07.797 "driver_specific": { 00:13:07.797 "raid": { 00:13:07.797 "uuid": "05759540-02c0-4813-afc4-b7a1d373084b", 00:13:07.797 "strip_size_kb": 0, 00:13:07.797 "state": "online", 00:13:07.797 "raid_level": "raid1", 00:13:07.797 "superblock": true, 00:13:07.797 "num_base_bdevs": 2, 00:13:07.797 "num_base_bdevs_discovered": 2, 00:13:07.797 "num_base_bdevs_operational": 2, 00:13:07.797 "base_bdevs_list": [ 00:13:07.797 { 00:13:07.797 "name": "BaseBdev1", 00:13:07.797 "uuid": "08cb839a-95cf-4627-8788-839de36a2bce", 00:13:07.797 "is_configured": true, 00:13:07.797 "data_offset": 2048, 00:13:07.797 "data_size": 63488 00:13:07.797 }, 00:13:07.797 { 00:13:07.797 "name": "BaseBdev2", 00:13:07.797 "uuid": "1ebea0de-10bd-424d-8294-7b1ced297f73", 00:13:07.797 "is_configured": true, 00:13:07.797 "data_offset": 2048, 00:13:07.797 "data_size": 63488 00:13:07.797 } 00:13:07.797 ] 00:13:07.797 } 00:13:07.797 } 00:13:07.797 }' 00:13:07.797 13:13:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:07.797 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:07.797 BaseBdev2' 00:13:07.797 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:07.797 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:07.797 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:08.116 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:08.116 "name": "BaseBdev1", 00:13:08.116 "aliases": [ 00:13:08.116 "08cb839a-95cf-4627-8788-839de36a2bce" 00:13:08.116 ], 00:13:08.116 "product_name": "Malloc disk", 00:13:08.116 "block_size": 512, 00:13:08.116 "num_blocks": 65536, 00:13:08.116 "uuid": "08cb839a-95cf-4627-8788-839de36a2bce", 00:13:08.116 "assigned_rate_limits": { 00:13:08.116 "rw_ios_per_sec": 0, 00:13:08.116 "rw_mbytes_per_sec": 0, 00:13:08.116 "r_mbytes_per_sec": 0, 00:13:08.116 "w_mbytes_per_sec": 0 00:13:08.116 }, 00:13:08.116 "claimed": true, 00:13:08.116 "claim_type": "exclusive_write", 00:13:08.116 "zoned": false, 00:13:08.116 "supported_io_types": { 00:13:08.116 "read": true, 00:13:08.116 "write": true, 00:13:08.116 "unmap": true, 00:13:08.116 "flush": true, 00:13:08.116 "reset": true, 00:13:08.116 "nvme_admin": false, 00:13:08.116 "nvme_io": false, 00:13:08.116 "nvme_io_md": false, 00:13:08.116 "write_zeroes": true, 00:13:08.116 "zcopy": true, 00:13:08.116 "get_zone_info": false, 00:13:08.116 "zone_management": false, 00:13:08.116 "zone_append": false, 00:13:08.116 "compare": false, 00:13:08.116 "compare_and_write": false, 00:13:08.116 "abort": true, 00:13:08.116 "seek_hole": false, 00:13:08.116 "seek_data": 
false, 00:13:08.116 "copy": true, 00:13:08.116 "nvme_iov_md": false 00:13:08.116 }, 00:13:08.116 "memory_domains": [ 00:13:08.116 { 00:13:08.116 "dma_device_id": "system", 00:13:08.116 "dma_device_type": 1 00:13:08.116 }, 00:13:08.116 { 00:13:08.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.116 "dma_device_type": 2 00:13:08.116 } 00:13:08.116 ], 00:13:08.116 "driver_specific": {} 00:13:08.116 }' 00:13:08.116 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:08.116 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:08.116 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:08.116 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:08.116 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:08.116 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:08.116 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:08.116 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:08.403 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:08.403 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:08.403 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:08.403 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:08.403 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:08.403 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:08.403 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:08.403 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:08.403 "name": "BaseBdev2", 00:13:08.403 "aliases": [ 00:13:08.403 "1ebea0de-10bd-424d-8294-7b1ced297f73" 00:13:08.403 ], 00:13:08.403 "product_name": "Malloc disk", 00:13:08.403 "block_size": 512, 00:13:08.403 "num_blocks": 65536, 00:13:08.403 "uuid": "1ebea0de-10bd-424d-8294-7b1ced297f73", 00:13:08.404 "assigned_rate_limits": { 00:13:08.404 "rw_ios_per_sec": 0, 00:13:08.404 "rw_mbytes_per_sec": 0, 00:13:08.404 "r_mbytes_per_sec": 0, 00:13:08.404 "w_mbytes_per_sec": 0 00:13:08.404 }, 00:13:08.404 "claimed": true, 00:13:08.404 "claim_type": "exclusive_write", 00:13:08.404 "zoned": false, 00:13:08.404 "supported_io_types": { 00:13:08.404 "read": true, 00:13:08.404 "write": true, 00:13:08.404 "unmap": true, 00:13:08.404 "flush": true, 00:13:08.404 "reset": true, 00:13:08.404 "nvme_admin": false, 00:13:08.404 "nvme_io": false, 00:13:08.404 "nvme_io_md": false, 00:13:08.404 "write_zeroes": true, 00:13:08.404 "zcopy": true, 00:13:08.404 "get_zone_info": false, 00:13:08.404 "zone_management": false, 00:13:08.404 "zone_append": false, 00:13:08.404 "compare": false, 00:13:08.404 "compare_and_write": false, 00:13:08.404 "abort": true, 00:13:08.404 "seek_hole": false, 00:13:08.404 "seek_data": false, 00:13:08.404 "copy": true, 00:13:08.404 "nvme_iov_md": false 00:13:08.404 }, 00:13:08.404 "memory_domains": [ 00:13:08.404 { 00:13:08.404 "dma_device_id": "system", 00:13:08.404 "dma_device_type": 1 00:13:08.404 }, 00:13:08.404 { 00:13:08.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.404 "dma_device_type": 2 00:13:08.404 } 00:13:08.404 ], 00:13:08.404 "driver_specific": {} 00:13:08.404 }' 00:13:08.664 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:08.664 13:13:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:08.664 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:08.664 13:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:08.664 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:08.664 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:08.664 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:08.664 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:08.664 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:08.664 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:08.924 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:08.924 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:08.924 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:09.183 [2024-07-25 13:13:19.430100] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 
00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.183 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.442 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:09.442 "name": "Existed_Raid", 00:13:09.442 "uuid": "05759540-02c0-4813-afc4-b7a1d373084b", 00:13:09.443 "strip_size_kb": 0, 00:13:09.443 "state": "online", 00:13:09.443 "raid_level": "raid1", 00:13:09.443 "superblock": true, 00:13:09.443 "num_base_bdevs": 2, 00:13:09.443 "num_base_bdevs_discovered": 1, 00:13:09.443 "num_base_bdevs_operational": 1, 00:13:09.443 
"base_bdevs_list": [ 00:13:09.443 { 00:13:09.443 "name": null, 00:13:09.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.443 "is_configured": false, 00:13:09.443 "data_offset": 2048, 00:13:09.443 "data_size": 63488 00:13:09.443 }, 00:13:09.443 { 00:13:09.443 "name": "BaseBdev2", 00:13:09.443 "uuid": "1ebea0de-10bd-424d-8294-7b1ced297f73", 00:13:09.443 "is_configured": true, 00:13:09.443 "data_offset": 2048, 00:13:09.443 "data_size": 63488 00:13:09.443 } 00:13:09.443 ] 00:13:09.443 }' 00:13:09.443 13:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:09.443 13:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.011 13:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:10.011 13:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:10.011 13:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.011 13:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:10.011 13:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:10.011 13:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:10.011 13:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:10.270 [2024-07-25 13:13:20.694387] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:10.270 [2024-07-25 13:13:20.694460] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:10.270 [2024-07-25 13:13:20.704779] bdev_raid.c: 
487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:10.270 [2024-07-25 13:13:20.704809] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.270 [2024-07-25 13:13:20.704819] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x229d610 name Existed_Raid, state offline 00:13:10.270 13:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:10.270 13:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:10.270 13:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.270 13:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:10.529 13:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:10.529 13:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:10.529 13:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:13:10.529 13:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 843735 00:13:10.529 13:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 843735 ']' 00:13:10.529 13:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 843735 00:13:10.529 13:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:10.529 13:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:10.529 13:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 843735 00:13:10.529 13:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 
-- # process_name=reactor_0 00:13:10.529 13:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:10.529 13:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 843735' 00:13:10.529 killing process with pid 843735 00:13:10.529 13:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 843735 00:13:10.529 [2024-07-25 13:13:21.016310] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:10.529 13:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 843735 00:13:10.529 [2024-07-25 13:13:21.017152] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:10.788 13:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:10.788 00:13:10.788 real 0m9.943s 00:13:10.788 user 0m17.657s 00:13:10.788 sys 0m1.860s 00:13:10.788 13:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:10.788 13:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.788 ************************************ 00:13:10.788 END TEST raid_state_function_test_sb 00:13:10.788 ************************************ 00:13:10.788 13:13:21 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:13:10.788 13:13:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:10.788 13:13:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:10.788 13:13:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:11.047 ************************************ 00:13:11.047 START TEST raid_superblock_test 00:13:11.047 ************************************ 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:13:11.047 13:13:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=845762 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 845762 /var/tmp/spdk-raid.sock 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:11.047 13:13:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 845762 ']' 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.047 13:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:11.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:11.048 13:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.048 13:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.048 [2024-07-25 13:13:21.351631] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:13:11.048 [2024-07-25 13:13:21.351688] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid845762 ] 00:13:11.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.048 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:11.048 [2024-07-25 13:13:21.484597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.307 [2024-07-25 13:13:21.567302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.307 [2024-07-25
13:13:21.626641] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.307 [2024-07-25 13:13:21.626677] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.875 13:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:11.875 13:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:11.875 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:13:11.875 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:13:11.875 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:13:11.875 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:13:11.875 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:11.875 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:11.875 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:13:11.875 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:11.875 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:12.135 malloc1 00:13:12.135 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:12.394 [2024-07-25 13:13:22.684623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:12.394 [2024-07-25 13:13:22.684668] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:13:12.394 [2024-07-25 13:13:22.684687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x240f2f0 00:13:12.394 [2024-07-25 13:13:22.684699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.394 [2024-07-25 13:13:22.686167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.394 [2024-07-25 13:13:22.686194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:12.394 pt1 00:13:12.394 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:13:12.394 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:13:12.394 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:13:12.394 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:13:12.394 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:12.394 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:12.394 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:13:12.394 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:12.394 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:12.654 malloc2 00:13:12.654 13:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:12.654 [2024-07-25 13:13:23.138220] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc2 00:13:12.654 [2024-07-25 13:13:23.138260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.654 [2024-07-25 13:13:23.138275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25a6f70 00:13:12.654 [2024-07-25 13:13:23.138286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.654 [2024-07-25 13:13:23.139602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.654 [2024-07-25 13:13:23.139628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:12.914 pt2 00:13:12.914 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:13:12.914 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:13:12.914 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:13:12.914 [2024-07-25 13:13:23.370835] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:12.914 [2024-07-25 13:13:23.371886] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:12.914 [2024-07-25 13:13:23.371999] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x25a9760 00:13:12.914 [2024-07-25 13:13:23.372011] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:12.914 [2024-07-25 13:13:23.372188] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25ac5b0 00:13:12.914 [2024-07-25 13:13:23.372305] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25a9760 00:13:12.914 [2024-07-25 13:13:23.372315] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25a9760 00:13:12.914 [2024-07-25 
13:13:23.372409] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.914 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:12.914 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:12.914 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:12.914 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:12.914 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:12.914 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:12.914 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:12.914 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:12.914 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:12.914 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:12.914 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.914 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.173 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:13.173 "name": "raid_bdev1", 00:13:13.173 "uuid": "0b456add-7401-4c5f-aeea-9b9f468d58aa", 00:13:13.173 "strip_size_kb": 0, 00:13:13.173 "state": "online", 00:13:13.173 "raid_level": "raid1", 00:13:13.173 "superblock": true, 00:13:13.173 "num_base_bdevs": 2, 00:13:13.173 "num_base_bdevs_discovered": 2, 00:13:13.173 "num_base_bdevs_operational": 2, 00:13:13.173 "base_bdevs_list": 
[ 00:13:13.173 { 00:13:13.173 "name": "pt1", 00:13:13.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:13.173 "is_configured": true, 00:13:13.173 "data_offset": 2048, 00:13:13.173 "data_size": 63488 00:13:13.173 }, 00:13:13.173 { 00:13:13.173 "name": "pt2", 00:13:13.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.173 "is_configured": true, 00:13:13.173 "data_offset": 2048, 00:13:13.173 "data_size": 63488 00:13:13.173 } 00:13:13.173 ] 00:13:13.173 }' 00:13:13.173 13:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:13.173 13:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.742 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:13:13.742 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:13.742 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:13.742 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:13.742 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:13.742 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:13.742 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:13.742 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:14.002 [2024-07-25 13:13:24.413796] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.002 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:14.002 "name": "raid_bdev1", 00:13:14.002 "aliases": [ 00:13:14.002 "0b456add-7401-4c5f-aeea-9b9f468d58aa" 00:13:14.002 ], 00:13:14.002 "product_name": "Raid 
Volume", 00:13:14.002 "block_size": 512, 00:13:14.002 "num_blocks": 63488, 00:13:14.002 "uuid": "0b456add-7401-4c5f-aeea-9b9f468d58aa", 00:13:14.002 "assigned_rate_limits": { 00:13:14.002 "rw_ios_per_sec": 0, 00:13:14.002 "rw_mbytes_per_sec": 0, 00:13:14.002 "r_mbytes_per_sec": 0, 00:13:14.002 "w_mbytes_per_sec": 0 00:13:14.002 }, 00:13:14.002 "claimed": false, 00:13:14.002 "zoned": false, 00:13:14.002 "supported_io_types": { 00:13:14.002 "read": true, 00:13:14.002 "write": true, 00:13:14.002 "unmap": false, 00:13:14.002 "flush": false, 00:13:14.002 "reset": true, 00:13:14.002 "nvme_admin": false, 00:13:14.002 "nvme_io": false, 00:13:14.002 "nvme_io_md": false, 00:13:14.002 "write_zeroes": true, 00:13:14.002 "zcopy": false, 00:13:14.002 "get_zone_info": false, 00:13:14.002 "zone_management": false, 00:13:14.002 "zone_append": false, 00:13:14.002 "compare": false, 00:13:14.002 "compare_and_write": false, 00:13:14.002 "abort": false, 00:13:14.002 "seek_hole": false, 00:13:14.002 "seek_data": false, 00:13:14.002 "copy": false, 00:13:14.002 "nvme_iov_md": false 00:13:14.002 }, 00:13:14.002 "memory_domains": [ 00:13:14.002 { 00:13:14.002 "dma_device_id": "system", 00:13:14.002 "dma_device_type": 1 00:13:14.002 }, 00:13:14.002 { 00:13:14.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.002 "dma_device_type": 2 00:13:14.002 }, 00:13:14.002 { 00:13:14.002 "dma_device_id": "system", 00:13:14.002 "dma_device_type": 1 00:13:14.002 }, 00:13:14.002 { 00:13:14.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.002 "dma_device_type": 2 00:13:14.002 } 00:13:14.002 ], 00:13:14.002 "driver_specific": { 00:13:14.002 "raid": { 00:13:14.002 "uuid": "0b456add-7401-4c5f-aeea-9b9f468d58aa", 00:13:14.002 "strip_size_kb": 0, 00:13:14.002 "state": "online", 00:13:14.002 "raid_level": "raid1", 00:13:14.002 "superblock": true, 00:13:14.002 "num_base_bdevs": 2, 00:13:14.002 "num_base_bdevs_discovered": 2, 00:13:14.002 "num_base_bdevs_operational": 2, 00:13:14.002 "base_bdevs_list": [ 
00:13:14.002 { 00:13:14.002 "name": "pt1", 00:13:14.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:14.002 "is_configured": true, 00:13:14.002 "data_offset": 2048, 00:13:14.002 "data_size": 63488 00:13:14.002 }, 00:13:14.002 { 00:13:14.002 "name": "pt2", 00:13:14.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.002 "is_configured": true, 00:13:14.002 "data_offset": 2048, 00:13:14.002 "data_size": 63488 00:13:14.002 } 00:13:14.002 ] 00:13:14.002 } 00:13:14.002 } 00:13:14.002 }' 00:13:14.002 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:14.002 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:14.002 pt2' 00:13:14.002 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:14.002 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:14.002 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:14.262 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:14.262 "name": "pt1", 00:13:14.262 "aliases": [ 00:13:14.262 "00000000-0000-0000-0000-000000000001" 00:13:14.262 ], 00:13:14.262 "product_name": "passthru", 00:13:14.262 "block_size": 512, 00:13:14.262 "num_blocks": 65536, 00:13:14.262 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:14.262 "assigned_rate_limits": { 00:13:14.262 "rw_ios_per_sec": 0, 00:13:14.262 "rw_mbytes_per_sec": 0, 00:13:14.262 "r_mbytes_per_sec": 0, 00:13:14.262 "w_mbytes_per_sec": 0 00:13:14.262 }, 00:13:14.262 "claimed": true, 00:13:14.262 "claim_type": "exclusive_write", 00:13:14.262 "zoned": false, 00:13:14.262 "supported_io_types": { 00:13:14.262 "read": true, 00:13:14.262 "write": true, 00:13:14.262 
"unmap": true, 00:13:14.262 "flush": true, 00:13:14.262 "reset": true, 00:13:14.262 "nvme_admin": false, 00:13:14.262 "nvme_io": false, 00:13:14.262 "nvme_io_md": false, 00:13:14.262 "write_zeroes": true, 00:13:14.262 "zcopy": true, 00:13:14.262 "get_zone_info": false, 00:13:14.262 "zone_management": false, 00:13:14.262 "zone_append": false, 00:13:14.262 "compare": false, 00:13:14.262 "compare_and_write": false, 00:13:14.262 "abort": true, 00:13:14.262 "seek_hole": false, 00:13:14.262 "seek_data": false, 00:13:14.262 "copy": true, 00:13:14.262 "nvme_iov_md": false 00:13:14.262 }, 00:13:14.262 "memory_domains": [ 00:13:14.262 { 00:13:14.262 "dma_device_id": "system", 00:13:14.262 "dma_device_type": 1 00:13:14.262 }, 00:13:14.262 { 00:13:14.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.262 "dma_device_type": 2 00:13:14.262 } 00:13:14.262 ], 00:13:14.262 "driver_specific": { 00:13:14.262 "passthru": { 00:13:14.262 "name": "pt1", 00:13:14.262 "base_bdev_name": "malloc1" 00:13:14.262 } 00:13:14.262 } 00:13:14.262 }' 00:13:14.262 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:14.521 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:14.521 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:14.521 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:14.521 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:14.521 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:14.521 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:14.521 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:14.521 13:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:14.521 13:13:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:14.521 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:14.781 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:14.781 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:14.781 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:14.781 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:15.040 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:15.040 "name": "pt2", 00:13:15.040 "aliases": [ 00:13:15.040 "00000000-0000-0000-0000-000000000002" 00:13:15.040 ], 00:13:15.040 "product_name": "passthru", 00:13:15.040 "block_size": 512, 00:13:15.040 "num_blocks": 65536, 00:13:15.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:15.040 "assigned_rate_limits": { 00:13:15.040 "rw_ios_per_sec": 0, 00:13:15.040 "rw_mbytes_per_sec": 0, 00:13:15.040 "r_mbytes_per_sec": 0, 00:13:15.040 "w_mbytes_per_sec": 0 00:13:15.040 }, 00:13:15.040 "claimed": true, 00:13:15.040 "claim_type": "exclusive_write", 00:13:15.040 "zoned": false, 00:13:15.040 "supported_io_types": { 00:13:15.040 "read": true, 00:13:15.040 "write": true, 00:13:15.040 "unmap": true, 00:13:15.040 "flush": true, 00:13:15.040 "reset": true, 00:13:15.040 "nvme_admin": false, 00:13:15.040 "nvme_io": false, 00:13:15.040 "nvme_io_md": false, 00:13:15.040 "write_zeroes": true, 00:13:15.040 "zcopy": true, 00:13:15.040 "get_zone_info": false, 00:13:15.040 "zone_management": false, 00:13:15.040 "zone_append": false, 00:13:15.040 "compare": false, 00:13:15.040 "compare_and_write": false, 00:13:15.040 "abort": true, 00:13:15.040 "seek_hole": false, 00:13:15.040 "seek_data": false, 00:13:15.040 "copy": true, 00:13:15.040 "nvme_iov_md": 
false 00:13:15.040 }, 00:13:15.040 "memory_domains": [ 00:13:15.040 { 00:13:15.040 "dma_device_id": "system", 00:13:15.040 "dma_device_type": 1 00:13:15.040 }, 00:13:15.040 { 00:13:15.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.040 "dma_device_type": 2 00:13:15.040 } 00:13:15.040 ], 00:13:15.040 "driver_specific": { 00:13:15.040 "passthru": { 00:13:15.040 "name": "pt2", 00:13:15.040 "base_bdev_name": "malloc2" 00:13:15.040 } 00:13:15.040 } 00:13:15.040 }' 00:13:15.040 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:15.040 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:15.040 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:15.040 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:15.040 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:15.040 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:15.040 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:15.040 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:15.040 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:15.040 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:15.299 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:15.299 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:15.299 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:15.299 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:13:15.559 [2024-07-25 
13:13:25.813507] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.559 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=0b456add-7401-4c5f-aeea-9b9f468d58aa 00:13:15.559 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 0b456add-7401-4c5f-aeea-9b9f468d58aa ']' 00:13:15.559 13:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:15.559 [2024-07-25 13:13:26.041863] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:15.559 [2024-07-25 13:13:26.041880] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.559 [2024-07-25 13:13:26.041929] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.559 [2024-07-25 13:13:26.041978] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.559 [2024-07-25 13:13:26.041989] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25a9760 name raid_bdev1, state offline 00:13:15.819 13:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.819 13:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:13:15.819 13:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:13:15.819 13:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:13:15.819 13:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:13:15.819 13:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:16.078 13:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:13:16.078 13:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:16.338 13:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:16.339 13:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:16.600 13:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:13:16.600 13:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:13:16.600 13:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:16.600 13:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:13:16.600 13:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:16.600 13:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.600 13:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:16.600 13:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.600 13:13:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:16.600 13:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.600 13:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:16.600 13:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:13:16.600 13:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:13:16.859 [2024-07-25 13:13:27.184813] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:16.859 [2024-07-25 13:13:27.186076] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:16.859 [2024-07-25 13:13:27.186132] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:16.859 [2024-07-25 13:13:27.186185] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:16.859 [2024-07-25 13:13:27.186203] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:16.859 [2024-07-25 13:13:27.186212] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25a99f0 name raid_bdev1, state configuring 00:13:16.859 request: 00:13:16.859 { 00:13:16.859 "name": "raid_bdev1", 00:13:16.859 "raid_level": "raid1", 00:13:16.859 "base_bdevs": [ 00:13:16.859 "malloc1", 00:13:16.859 "malloc2" 00:13:16.859 ], 00:13:16.859 "superblock": false, 00:13:16.859 "method": "bdev_raid_create", 00:13:16.860 "req_id": 1 00:13:16.860 } 
00:13:16.860 Got JSON-RPC error response 00:13:16.860 response: 00:13:16.860 { 00:13:16.860 "code": -17, 00:13:16.860 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:16.860 } 00:13:16.860 13:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:16.860 13:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:16.860 13:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:16.860 13:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:16.860 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.860 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:13:17.125 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:13:17.125 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:13:17.125 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:17.384 [2024-07-25 13:13:27.625934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:17.384 [2024-07-25 13:13:27.625979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.384 [2024-07-25 13:13:27.625997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b2bf0 00:13:17.384 [2024-07-25 13:13:27.626013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.384 [2024-07-25 13:13:27.627499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.384 [2024-07-25 13:13:27.627528] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:17.384 [2024-07-25 13:13:27.627591] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:17.384 [2024-07-25 13:13:27.627615] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:17.384 pt1 00:13:17.384 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:17.384 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:17.384 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:17.384 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:17.384 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:17.384 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:17.384 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:17.384 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:17.384 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:17.384 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:17.384 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.384 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.643 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:17.643 "name": "raid_bdev1", 00:13:17.643 "uuid": "0b456add-7401-4c5f-aeea-9b9f468d58aa", 00:13:17.643 "strip_size_kb": 0, 
00:13:17.643 "state": "configuring", 00:13:17.643 "raid_level": "raid1", 00:13:17.643 "superblock": true, 00:13:17.643 "num_base_bdevs": 2, 00:13:17.643 "num_base_bdevs_discovered": 1, 00:13:17.643 "num_base_bdevs_operational": 2, 00:13:17.643 "base_bdevs_list": [ 00:13:17.643 { 00:13:17.643 "name": "pt1", 00:13:17.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:17.643 "is_configured": true, 00:13:17.643 "data_offset": 2048, 00:13:17.643 "data_size": 63488 00:13:17.643 }, 00:13:17.643 { 00:13:17.643 "name": null, 00:13:17.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:17.643 "is_configured": false, 00:13:17.643 "data_offset": 2048, 00:13:17.643 "data_size": 63488 00:13:17.643 } 00:13:17.643 ] 00:13:17.643 }' 00:13:17.643 13:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:17.644 13:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:18.212 [2024-07-25 13:13:28.648623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:18.212 [2024-07-25 13:13:28.648670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.212 [2024-07-25 13:13:28.648687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25a9b10 00:13:18.212 [2024-07-25 13:13:28.648699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.212 [2024-07-25 
13:13:28.649022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.212 [2024-07-25 13:13:28.649038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:18.212 [2024-07-25 13:13:28.649096] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:18.212 [2024-07-25 13:13:28.649112] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:18.212 [2024-07-25 13:13:28.649219] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x240dc30 00:13:18.212 [2024-07-25 13:13:28.649230] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:18.212 [2024-07-25 13:13:28.649388] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25a85d0 00:13:18.212 [2024-07-25 13:13:28.649503] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x240dc30 00:13:18.212 [2024-07-25 13:13:28.649512] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x240dc30 00:13:18.212 [2024-07-25 13:13:28.649597] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.212 pt2 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.212 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.471 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:18.471 "name": "raid_bdev1", 00:13:18.471 "uuid": "0b456add-7401-4c5f-aeea-9b9f468d58aa", 00:13:18.471 "strip_size_kb": 0, 00:13:18.471 "state": "online", 00:13:18.471 "raid_level": "raid1", 00:13:18.471 "superblock": true, 00:13:18.471 "num_base_bdevs": 2, 00:13:18.471 "num_base_bdevs_discovered": 2, 00:13:18.471 "num_base_bdevs_operational": 2, 00:13:18.471 "base_bdevs_list": [ 00:13:18.472 { 00:13:18.472 "name": "pt1", 00:13:18.472 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:18.472 "is_configured": true, 00:13:18.472 "data_offset": 2048, 00:13:18.472 "data_size": 63488 00:13:18.472 }, 00:13:18.472 { 00:13:18.472 "name": "pt2", 00:13:18.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:18.472 "is_configured": true, 00:13:18.472 "data_offset": 2048, 00:13:18.472 "data_size": 63488 00:13:18.472 } 00:13:18.472 ] 00:13:18.472 }' 00:13:18.472 13:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:18.472 13:13:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:19.041 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:13:19.041 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:19.041 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:19.041 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:19.041 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:19.041 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:19.041 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:19.041 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:19.300 [2024-07-25 13:13:29.623424] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:19.300 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:19.300 "name": "raid_bdev1", 00:13:19.300 "aliases": [ 00:13:19.300 "0b456add-7401-4c5f-aeea-9b9f468d58aa" 00:13:19.300 ], 00:13:19.300 "product_name": "Raid Volume", 00:13:19.300 "block_size": 512, 00:13:19.300 "num_blocks": 63488, 00:13:19.300 "uuid": "0b456add-7401-4c5f-aeea-9b9f468d58aa", 00:13:19.300 "assigned_rate_limits": { 00:13:19.300 "rw_ios_per_sec": 0, 00:13:19.300 "rw_mbytes_per_sec": 0, 00:13:19.300 "r_mbytes_per_sec": 0, 00:13:19.300 "w_mbytes_per_sec": 0 00:13:19.300 }, 00:13:19.300 "claimed": false, 00:13:19.300 "zoned": false, 00:13:19.300 "supported_io_types": { 00:13:19.300 "read": true, 00:13:19.300 "write": true, 00:13:19.300 "unmap": false, 00:13:19.300 "flush": false, 00:13:19.300 "reset": true, 00:13:19.300 "nvme_admin": false, 00:13:19.300 "nvme_io": 
false, 00:13:19.300 "nvme_io_md": false, 00:13:19.300 "write_zeroes": true, 00:13:19.300 "zcopy": false, 00:13:19.300 "get_zone_info": false, 00:13:19.300 "zone_management": false, 00:13:19.300 "zone_append": false, 00:13:19.300 "compare": false, 00:13:19.300 "compare_and_write": false, 00:13:19.300 "abort": false, 00:13:19.300 "seek_hole": false, 00:13:19.300 "seek_data": false, 00:13:19.300 "copy": false, 00:13:19.300 "nvme_iov_md": false 00:13:19.300 }, 00:13:19.300 "memory_domains": [ 00:13:19.300 { 00:13:19.300 "dma_device_id": "system", 00:13:19.300 "dma_device_type": 1 00:13:19.300 }, 00:13:19.300 { 00:13:19.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.300 "dma_device_type": 2 00:13:19.300 }, 00:13:19.300 { 00:13:19.300 "dma_device_id": "system", 00:13:19.300 "dma_device_type": 1 00:13:19.300 }, 00:13:19.300 { 00:13:19.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.300 "dma_device_type": 2 00:13:19.300 } 00:13:19.300 ], 00:13:19.300 "driver_specific": { 00:13:19.300 "raid": { 00:13:19.301 "uuid": "0b456add-7401-4c5f-aeea-9b9f468d58aa", 00:13:19.301 "strip_size_kb": 0, 00:13:19.301 "state": "online", 00:13:19.301 "raid_level": "raid1", 00:13:19.301 "superblock": true, 00:13:19.301 "num_base_bdevs": 2, 00:13:19.301 "num_base_bdevs_discovered": 2, 00:13:19.301 "num_base_bdevs_operational": 2, 00:13:19.301 "base_bdevs_list": [ 00:13:19.301 { 00:13:19.301 "name": "pt1", 00:13:19.301 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:19.301 "is_configured": true, 00:13:19.301 "data_offset": 2048, 00:13:19.301 "data_size": 63488 00:13:19.301 }, 00:13:19.301 { 00:13:19.301 "name": "pt2", 00:13:19.301 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:19.301 "is_configured": true, 00:13:19.301 "data_offset": 2048, 00:13:19.301 "data_size": 63488 00:13:19.301 } 00:13:19.301 ] 00:13:19.301 } 00:13:19.301 } 00:13:19.301 }' 00:13:19.301 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:19.301 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:19.301 pt2' 00:13:19.301 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:19.301 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:19.301 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:19.571 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:19.571 "name": "pt1", 00:13:19.571 "aliases": [ 00:13:19.571 "00000000-0000-0000-0000-000000000001" 00:13:19.571 ], 00:13:19.571 "product_name": "passthru", 00:13:19.571 "block_size": 512, 00:13:19.571 "num_blocks": 65536, 00:13:19.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:19.571 "assigned_rate_limits": { 00:13:19.571 "rw_ios_per_sec": 0, 00:13:19.571 "rw_mbytes_per_sec": 0, 00:13:19.571 "r_mbytes_per_sec": 0, 00:13:19.571 "w_mbytes_per_sec": 0 00:13:19.571 }, 00:13:19.571 "claimed": true, 00:13:19.571 "claim_type": "exclusive_write", 00:13:19.571 "zoned": false, 00:13:19.571 "supported_io_types": { 00:13:19.571 "read": true, 00:13:19.571 "write": true, 00:13:19.571 "unmap": true, 00:13:19.571 "flush": true, 00:13:19.571 "reset": true, 00:13:19.571 "nvme_admin": false, 00:13:19.571 "nvme_io": false, 00:13:19.571 "nvme_io_md": false, 00:13:19.571 "write_zeroes": true, 00:13:19.571 "zcopy": true, 00:13:19.571 "get_zone_info": false, 00:13:19.571 "zone_management": false, 00:13:19.571 "zone_append": false, 00:13:19.571 "compare": false, 00:13:19.571 "compare_and_write": false, 00:13:19.571 "abort": true, 00:13:19.571 "seek_hole": false, 00:13:19.571 "seek_data": false, 00:13:19.571 "copy": true, 00:13:19.571 "nvme_iov_md": false 00:13:19.571 }, 00:13:19.571 
"memory_domains": [ 00:13:19.571 { 00:13:19.571 "dma_device_id": "system", 00:13:19.571 "dma_device_type": 1 00:13:19.571 }, 00:13:19.571 { 00:13:19.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.571 "dma_device_type": 2 00:13:19.571 } 00:13:19.571 ], 00:13:19.571 "driver_specific": { 00:13:19.571 "passthru": { 00:13:19.571 "name": "pt1", 00:13:19.571 "base_bdev_name": "malloc1" 00:13:19.571 } 00:13:19.571 } 00:13:19.571 }' 00:13:19.571 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.571 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.571 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:19.571 13:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.571 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.834 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:19.834 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.834 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.834 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:19.834 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.834 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.834 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:19.834 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:19.834 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:19.834 13:13:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:20.093 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:20.093 "name": "pt2", 00:13:20.093 "aliases": [ 00:13:20.094 "00000000-0000-0000-0000-000000000002" 00:13:20.094 ], 00:13:20.094 "product_name": "passthru", 00:13:20.094 "block_size": 512, 00:13:20.094 "num_blocks": 65536, 00:13:20.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:20.094 "assigned_rate_limits": { 00:13:20.094 "rw_ios_per_sec": 0, 00:13:20.094 "rw_mbytes_per_sec": 0, 00:13:20.094 "r_mbytes_per_sec": 0, 00:13:20.094 "w_mbytes_per_sec": 0 00:13:20.094 }, 00:13:20.094 "claimed": true, 00:13:20.094 "claim_type": "exclusive_write", 00:13:20.094 "zoned": false, 00:13:20.094 "supported_io_types": { 00:13:20.094 "read": true, 00:13:20.094 "write": true, 00:13:20.094 "unmap": true, 00:13:20.094 "flush": true, 00:13:20.094 "reset": true, 00:13:20.094 "nvme_admin": false, 00:13:20.094 "nvme_io": false, 00:13:20.094 "nvme_io_md": false, 00:13:20.094 "write_zeroes": true, 00:13:20.094 "zcopy": true, 00:13:20.094 "get_zone_info": false, 00:13:20.094 "zone_management": false, 00:13:20.094 "zone_append": false, 00:13:20.094 "compare": false, 00:13:20.094 "compare_and_write": false, 00:13:20.094 "abort": true, 00:13:20.094 "seek_hole": false, 00:13:20.094 "seek_data": false, 00:13:20.094 "copy": true, 00:13:20.094 "nvme_iov_md": false 00:13:20.094 }, 00:13:20.094 "memory_domains": [ 00:13:20.094 { 00:13:20.094 "dma_device_id": "system", 00:13:20.094 "dma_device_type": 1 00:13:20.094 }, 00:13:20.094 { 00:13:20.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.094 "dma_device_type": 2 00:13:20.094 } 00:13:20.094 ], 00:13:20.094 "driver_specific": { 00:13:20.094 "passthru": { 00:13:20.094 "name": "pt2", 00:13:20.094 "base_bdev_name": "malloc2" 00:13:20.094 } 00:13:20.094 } 00:13:20.094 }' 00:13:20.094 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.094 13:13:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.094 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:20.094 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.353 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.353 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:20.353 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.353 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.353 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:20.353 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:20.353 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:20.353 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:20.353 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:20.353 13:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:13:20.612 [2024-07-25 13:13:31.027132] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.612 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 0b456add-7401-4c5f-aeea-9b9f468d58aa '!=' 0b456add-7401-4c5f-aeea-9b9f468d58aa ']' 00:13:20.612 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:13:20.612 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:20.612 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:20.612 13:13:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:20.870 [2024-07-25 13:13:31.255519] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:20.870 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:20.870 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:20.870 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:20.870 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:20.870 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:20.870 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:20.870 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:20.870 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:20.870 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:20.870 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:20.870 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.870 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.129 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:21.129 "name": "raid_bdev1", 00:13:21.129 "uuid": "0b456add-7401-4c5f-aeea-9b9f468d58aa", 00:13:21.129 "strip_size_kb": 0, 00:13:21.129 "state": "online", 00:13:21.129 "raid_level": 
"raid1", 00:13:21.129 "superblock": true, 00:13:21.129 "num_base_bdevs": 2, 00:13:21.129 "num_base_bdevs_discovered": 1, 00:13:21.129 "num_base_bdevs_operational": 1, 00:13:21.129 "base_bdevs_list": [ 00:13:21.129 { 00:13:21.129 "name": null, 00:13:21.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.129 "is_configured": false, 00:13:21.129 "data_offset": 2048, 00:13:21.129 "data_size": 63488 00:13:21.129 }, 00:13:21.129 { 00:13:21.129 "name": "pt2", 00:13:21.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:21.129 "is_configured": true, 00:13:21.129 "data_offset": 2048, 00:13:21.129 "data_size": 63488 00:13:21.129 } 00:13:21.129 ] 00:13:21.129 }' 00:13:21.129 13:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:21.129 13:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.709 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:21.988 [2024-07-25 13:13:32.262144] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:21.988 [2024-07-25 13:13:32.262171] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:21.988 [2024-07-25 13:13:32.262221] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:21.988 [2024-07-25 13:13:32.262260] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:21.988 [2024-07-25 13:13:32.262271] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x240dc30 name raid_bdev1, state offline 00:13:21.988 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.988 13:13:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:13:22.265 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:13:22.265 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:13:22.265 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:13:22.265 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:13:22.265 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:22.265 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:13:22.265 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:13:22.265 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:13:22.265 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:13:22.265 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=1 00:13:22.265 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:22.524 [2024-07-25 13:13:32.935879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:22.524 [2024-07-25 13:13:32.935925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.524 [2024-07-25 13:13:32.935943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25a8b60 00:13:22.524 [2024-07-25 13:13:32.935955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.524 [2024-07-25 13:13:32.937468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.524 [2024-07-25 
13:13:32.937497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:22.524 [2024-07-25 13:13:32.937560] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:22.524 [2024-07-25 13:13:32.937583] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:22.524 [2024-07-25 13:13:32.937661] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x240ddd0 00:13:22.524 [2024-07-25 13:13:32.937671] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:22.524 [2024-07-25 13:13:32.937827] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25a85d0 00:13:22.524 [2024-07-25 13:13:32.937938] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x240ddd0 00:13:22.524 [2024-07-25 13:13:32.937947] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x240ddd0 00:13:22.524 [2024-07-25 13:13:32.938038] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.524 pt2 00:13:22.524 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:22.524 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:22.524 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:22.524 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:22.524 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:22.524 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:22.524 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:22.524 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:13:22.524 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:22.524 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:22.524 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.524 13:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.783 13:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:22.783 "name": "raid_bdev1", 00:13:22.783 "uuid": "0b456add-7401-4c5f-aeea-9b9f468d58aa", 00:13:22.783 "strip_size_kb": 0, 00:13:22.783 "state": "online", 00:13:22.783 "raid_level": "raid1", 00:13:22.783 "superblock": true, 00:13:22.783 "num_base_bdevs": 2, 00:13:22.783 "num_base_bdevs_discovered": 1, 00:13:22.783 "num_base_bdevs_operational": 1, 00:13:22.783 "base_bdevs_list": [ 00:13:22.783 { 00:13:22.783 "name": null, 00:13:22.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.783 "is_configured": false, 00:13:22.783 "data_offset": 2048, 00:13:22.783 "data_size": 63488 00:13:22.783 }, 00:13:22.783 { 00:13:22.783 "name": "pt2", 00:13:22.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:22.783 "is_configured": true, 00:13:22.783 "data_offset": 2048, 00:13:22.783 "data_size": 63488 00:13:22.783 } 00:13:22.783 ] 00:13:22.783 }' 00:13:22.783 13:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:22.783 13:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.354 13:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:23.613 [2024-07-25 13:13:33.970639] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:13:23.613 [2024-07-25 13:13:33.970666] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:23.613 [2024-07-25 13:13:33.970718] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.613 [2024-07-25 13:13:33.970758] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:23.613 [2024-07-25 13:13:33.970768] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x240ddd0 name raid_bdev1, state offline 00:13:23.613 13:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.613 13:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:13:23.872 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:13:23.872 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:13:23.872 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:13:23.872 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:24.131 [2024-07-25 13:13:34.419802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:24.131 [2024-07-25 13:13:34.419851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.131 [2024-07-25 13:13:34.419870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25a7b00 00:13:24.131 [2024-07-25 13:13:34.419881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.131 [2024-07-25 13:13:34.421403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:24.131 [2024-07-25 13:13:34.421432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:24.131 [2024-07-25 13:13:34.421496] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:24.131 [2024-07-25 13:13:34.421518] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:24.131 [2024-07-25 13:13:34.421610] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:24.131 [2024-07-25 13:13:34.421622] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.131 [2024-07-25 13:13:34.421634] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25ad330 name raid_bdev1, state configuring 00:13:24.131 [2024-07-25 13:13:34.421654] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:24.131 [2024-07-25 13:13:34.421702] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x25ad330 00:13:24.131 [2024-07-25 13:13:34.421712] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:24.131 [2024-07-25 13:13:34.421866] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25a7ec0 00:13:24.131 [2024-07-25 13:13:34.421978] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25ad330 00:13:24.131 [2024-07-25 13:13:34.421987] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25ad330 00:13:24.131 [2024-07-25 13:13:34.422074] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.131 pt1 00:13:24.131 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:13:24.131 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:24.131 13:13:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:24.131 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:24.131 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:24.131 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:24.131 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:24.131 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:24.131 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:24.131 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:24.131 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:24.131 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.131 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.389 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:24.389 "name": "raid_bdev1", 00:13:24.389 "uuid": "0b456add-7401-4c5f-aeea-9b9f468d58aa", 00:13:24.389 "strip_size_kb": 0, 00:13:24.389 "state": "online", 00:13:24.389 "raid_level": "raid1", 00:13:24.389 "superblock": true, 00:13:24.389 "num_base_bdevs": 2, 00:13:24.389 "num_base_bdevs_discovered": 1, 00:13:24.389 "num_base_bdevs_operational": 1, 00:13:24.389 "base_bdevs_list": [ 00:13:24.389 { 00:13:24.389 "name": null, 00:13:24.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.389 "is_configured": false, 00:13:24.389 "data_offset": 2048, 00:13:24.389 "data_size": 63488 00:13:24.389 }, 00:13:24.389 { 
00:13:24.389 "name": "pt2", 00:13:24.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.389 "is_configured": true, 00:13:24.389 "data_offset": 2048, 00:13:24.389 "data_size": 63488 00:13:24.390 } 00:13:24.390 ] 00:13:24.390 }' 00:13:24.390 13:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:24.390 13:13:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.959 13:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:24.959 13:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:25.219 13:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:13:25.219 13:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:25.219 13:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:13:25.219 [2024-07-25 13:13:35.675319] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.219 13:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 0b456add-7401-4c5f-aeea-9b9f468d58aa '!=' 0b456add-7401-4c5f-aeea-9b9f468d58aa ']' 00:13:25.219 13:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 845762 00:13:25.219 13:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 845762 ']' 00:13:25.219 13:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 845762 00:13:25.219 13:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:25.219 13:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:13:25.219 13:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 845762 00:13:25.477 13:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:25.477 13:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:25.477 13:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 845762' 00:13:25.477 killing process with pid 845762 00:13:25.477 13:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 845762 00:13:25.477 [2024-07-25 13:13:35.755115] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:25.477 [2024-07-25 13:13:35.755171] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.477 [2024-07-25 13:13:35.755209] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.477 [2024-07-25 13:13:35.755219] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25ad330 name raid_bdev1, state offline 00:13:25.477 13:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 845762 00:13:25.477 [2024-07-25 13:13:35.771100] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:25.477 13:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:13:25.477 00:13:25.477 real 0m14.671s 00:13:25.477 user 0m26.549s 00:13:25.477 sys 0m2.758s 00:13:25.477 13:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:25.477 13:13:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.477 ************************************ 00:13:25.477 END TEST raid_superblock_test 00:13:25.477 ************************************ 00:13:25.737 13:13:36 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test 
raid_io_error_test raid1 2 read 00:13:25.737 13:13:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:25.737 13:13:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:25.737 13:13:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:25.737 ************************************ 00:13:25.737 START TEST raid_read_error_test 00:13:25.737 ************************************ 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local 
raid_bdev_name=raid_bdev1 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.J62owTLGCi 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=848517 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 848517 /var/tmp/spdk-raid.sock 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 848517 ']' 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:25.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:25.737 13:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.737 [2024-07-25 13:13:36.121591] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:13:25.737 [2024-07-25 13:13:36.121649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid848517 ] 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3f:01.7 
cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:25.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:25.737 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:25.996 [2024-07-25 13:13:36.252162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.996 [2024-07-25 13:13:36.339554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.996 [2024-07-25 13:13:36.402229] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.996 [2024-07-25 13:13:36.402264] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.563 13:13:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:26.563 13:13:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:26.563 13:13:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:26.563 13:13:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:26.822 BaseBdev1_malloc 00:13:26.822 13:13:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:27.081 true 00:13:27.081 13:13:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:27.341 [2024-07-25 13:13:37.688759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:27.341 [2024-07-25 13:13:37.688799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.341 [2024-07-25 13:13:37.688817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e911d0 00:13:27.341 [2024-07-25 13:13:37.688829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.341 [2024-07-25 13:13:37.690410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.341 [2024-07-25 13:13:37.690437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:27.341 BaseBdev1 00:13:27.341 13:13:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:27.341 13:13:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:27.600 BaseBdev2_malloc 00:13:27.600 13:13:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:27.858 true 00:13:27.858 13:13:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:27.859 [2024-07-25 13:13:38.342793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:27.859 [2024-07-25 13:13:38.342831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.859 [2024-07-25 13:13:38.342849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e94710 00:13:27.859 [2024-07-25 13:13:38.342861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.859 [2024-07-25 13:13:38.344216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.859 [2024-07-25 13:13:38.344248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:28.118 BaseBdev2 00:13:28.118 13:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:13:28.118 [2024-07-25 13:13:38.567409] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:28.118 [2024-07-25 13:13:38.568656] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.118 [2024-07-25 13:13:38.568820] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e97bc0 00:13:28.118 [2024-07-25 13:13:38.568832] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:28.118 [2024-07-25 13:13:38.569012] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e9aa10 00:13:28.118 [2024-07-25 13:13:38.569161] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e97bc0 00:13:28.118 [2024-07-25 13:13:38.569171] 
bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e97bc0 00:13:28.118 [2024-07-25 13:13:38.569285] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.118 13:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:28.118 13:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:28.118 13:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:28.118 13:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:28.118 13:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:28.118 13:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:28.118 13:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:28.118 13:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:28.118 13:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:28.118 13:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:28.118 13:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.118 13:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.377 13:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:28.377 "name": "raid_bdev1", 00:13:28.377 "uuid": "045e7d31-c11a-418f-a143-e2683501d77e", 00:13:28.377 "strip_size_kb": 0, 00:13:28.377 "state": "online", 00:13:28.377 "raid_level": "raid1", 00:13:28.377 "superblock": true, 00:13:28.377 
"num_base_bdevs": 2, 00:13:28.377 "num_base_bdevs_discovered": 2, 00:13:28.377 "num_base_bdevs_operational": 2, 00:13:28.377 "base_bdevs_list": [ 00:13:28.377 { 00:13:28.377 "name": "BaseBdev1", 00:13:28.377 "uuid": "d6ff0f06-fefd-5f8a-9647-2a0b1f4b7022", 00:13:28.377 "is_configured": true, 00:13:28.377 "data_offset": 2048, 00:13:28.377 "data_size": 63488 00:13:28.377 }, 00:13:28.377 { 00:13:28.377 "name": "BaseBdev2", 00:13:28.377 "uuid": "4f230004-3a6d-5e8f-9664-902c2e4d7c32", 00:13:28.377 "is_configured": true, 00:13:28.377 "data_offset": 2048, 00:13:28.377 "data_size": 63488 00:13:28.377 } 00:13:28.377 ] 00:13:28.377 }' 00:13:28.377 13:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:28.377 13:13:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.945 13:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:13:28.945 13:13:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:29.203 [2024-07-25 13:13:39.486077] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e98a10 00:13:30.137 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:30.137 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:13:30.137 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:30.137 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:13:30.137 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:13:30.137 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:30.137 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:30.137 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:30.137 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:30.137 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:30.137 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:30.137 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:30.137 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:30.137 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:30.137 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:30.395 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.395 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.395 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:30.395 "name": "raid_bdev1", 00:13:30.395 "uuid": "045e7d31-c11a-418f-a143-e2683501d77e", 00:13:30.395 "strip_size_kb": 0, 00:13:30.395 "state": "online", 00:13:30.395 "raid_level": "raid1", 00:13:30.395 "superblock": true, 00:13:30.395 "num_base_bdevs": 2, 00:13:30.395 "num_base_bdevs_discovered": 2, 00:13:30.395 "num_base_bdevs_operational": 2, 00:13:30.395 "base_bdevs_list": [ 00:13:30.395 { 00:13:30.395 "name": "BaseBdev1", 00:13:30.395 "uuid": "d6ff0f06-fefd-5f8a-9647-2a0b1f4b7022", 00:13:30.395 "is_configured": true, 00:13:30.395 
"data_offset": 2048, 00:13:30.395 "data_size": 63488 00:13:30.395 }, 00:13:30.395 { 00:13:30.395 "name": "BaseBdev2", 00:13:30.395 "uuid": "4f230004-3a6d-5e8f-9664-902c2e4d7c32", 00:13:30.395 "is_configured": true, 00:13:30.395 "data_offset": 2048, 00:13:30.395 "data_size": 63488 00:13:30.395 } 00:13:30.395 ] 00:13:30.395 }' 00:13:30.395 13:13:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:30.395 13:13:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.962 13:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:31.221 [2024-07-25 13:13:41.573739] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:31.221 [2024-07-25 13:13:41.573773] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.221 [2024-07-25 13:13:41.576648] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.221 [2024-07-25 13:13:41.576680] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.221 [2024-07-25 13:13:41.576743] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.221 [2024-07-25 13:13:41.576754] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e97bc0 name raid_bdev1, state offline 00:13:31.221 0 00:13:31.222 13:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 848517 00:13:31.222 13:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 848517 ']' 00:13:31.222 13:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 848517 00:13:31.222 13:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:13:31.222 13:13:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:31.222 13:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 848517 00:13:31.222 13:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:31.222 13:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:31.222 13:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 848517' 00:13:31.222 killing process with pid 848517 00:13:31.222 13:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 848517 00:13:31.222 [2024-07-25 13:13:41.652349] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.222 13:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 848517 00:13:31.222 [2024-07-25 13:13:41.662300] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:31.482 13:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.J62owTLGCi 00:13:31.482 13:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:13:31.482 13:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:13:31.482 13:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:13:31.482 13:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:13:31.482 13:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:31.482 13:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:31.482 13:13:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:31.482 00:13:31.482 real 0m5.821s 00:13:31.482 user 0m9.065s 00:13:31.482 sys 0m0.982s 00:13:31.482 13:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:13:31.482 13:13:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.482 ************************************ 00:13:31.482 END TEST raid_read_error_test 00:13:31.482 ************************************ 00:13:31.482 13:13:41 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:13:31.482 13:13:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:31.482 13:13:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.482 13:13:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:31.482 ************************************ 00:13:31.482 START TEST raid_write_error_test 00:13:31.482 ************************************ 00:13:31.482 13:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:13:31.482 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:13:31.482 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:13:31.482 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:13:31.482 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:13:31.482 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:31.482 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:13:31.482 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:31.482 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:31.482 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:13:31.482 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.sfnz0NvZHZ 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=849674 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 849674 /var/tmp/spdk-raid.sock 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 849674 ']' 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:31.483 13:13:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:31.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:31.483 13:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:31.743 13:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.743 [2024-07-25 13:13:42.025580] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:13:31.743 [2024-07-25 13:13:42.025644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid849674 ] 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3d:01.6 cannot be used 
00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:31.743 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:31.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.743 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:31.743 [2024-07-25 13:13:42.158003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.002 [2024-07-25 13:13:42.238030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.002 [2024-07-25 13:13:42.299044] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.002 [2024-07-25 13:13:42.299079] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.569 13:13:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:32.569 13:13:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:32.569 13:13:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:32.569 13:13:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:33.137 BaseBdev1_malloc 00:13:33.137 13:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:33.395 true 00:13:33.395 13:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:33.395 [2024-07-25 13:13:43.881983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:33.395 [2024-07-25 13:13:43.882024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.395 [2024-07-25 13:13:43.882041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9971d0 00:13:33.395 [2024-07-25 13:13:43.882053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.395 [2024-07-25 13:13:43.883589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.395 [2024-07-25 13:13:43.883617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:33.653 BaseBdev1 00:13:33.653 13:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:33.653 13:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:33.653 BaseBdev2_malloc 00:13:33.653 13:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:33.911 true 00:13:33.911 13:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:34.170 [2024-07-25 13:13:44.563981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:34.170 [2024-07-25 13:13:44.564018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.170 [2024-07-25 13:13:44.564035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x99a710 00:13:34.170 [2024-07-25 13:13:44.564046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.170 [2024-07-25 13:13:44.565359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.170 [2024-07-25 13:13:44.565385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:34.170 BaseBdev2 00:13:34.171 13:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:13:34.429 [2024-07-25 13:13:44.792609] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.429 [2024-07-25 13:13:44.793730] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.429 [2024-07-25 13:13:44.793893] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x99dbc0 
00:13:34.429 [2024-07-25 13:13:44.793905] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:34.429 [2024-07-25 13:13:44.794077] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9a0a10 00:13:34.429 [2024-07-25 13:13:44.794219] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x99dbc0 00:13:34.429 [2024-07-25 13:13:44.794229] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x99dbc0 00:13:34.429 [2024-07-25 13:13:44.794333] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.429 13:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:34.429 13:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:34.429 13:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:34.429 13:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:34.429 13:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:34.429 13:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:34.429 13:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:34.429 13:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:34.429 13:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:34.429 13:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:34.429 13:13:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.429 13:13:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.688 13:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:34.688 "name": "raid_bdev1", 00:13:34.688 "uuid": "f7dbece3-f513-4a62-80dc-76e740e92457", 00:13:34.688 "strip_size_kb": 0, 00:13:34.688 "state": "online", 00:13:34.688 "raid_level": "raid1", 00:13:34.688 "superblock": true, 00:13:34.688 "num_base_bdevs": 2, 00:13:34.688 "num_base_bdevs_discovered": 2, 00:13:34.688 "num_base_bdevs_operational": 2, 00:13:34.688 "base_bdevs_list": [ 00:13:34.688 { 00:13:34.688 "name": "BaseBdev1", 00:13:34.688 "uuid": "6f2ab10e-05e8-5c4c-89ea-28d2ba552c49", 00:13:34.688 "is_configured": true, 00:13:34.688 "data_offset": 2048, 00:13:34.688 "data_size": 63488 00:13:34.688 }, 00:13:34.688 { 00:13:34.688 "name": "BaseBdev2", 00:13:34.688 "uuid": "d2ed2b60-b936-586c-bb21-f0fe6aee88b2", 00:13:34.688 "is_configured": true, 00:13:34.688 "data_offset": 2048, 00:13:34.688 "data_size": 63488 00:13:34.688 } 00:13:34.688 ] 00:13:34.688 }' 00:13:34.688 13:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:34.688 13:13:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.256 13:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:13:35.256 13:13:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:35.256 [2024-07-25 13:13:45.711293] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x99ea10 00:13:36.191 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:36.451 [2024-07-25 13:13:46.825675] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing 
base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:36.451 [2024-07-25 13:13:46.825736] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:36.451 [2024-07-25 13:13:46.825905] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x99ea10 00:13:36.451 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:13:36.451 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:36.451 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:13:36.451 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=1 00:13:36.451 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:36.451 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:36.451 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:36.451 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:36.451 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:36.451 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:36.451 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:36.451 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:36.451 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:36.451 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:36.451 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.451 13:13:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.713 13:13:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:36.713 "name": "raid_bdev1", 00:13:36.713 "uuid": "f7dbece3-f513-4a62-80dc-76e740e92457", 00:13:36.713 "strip_size_kb": 0, 00:13:36.713 "state": "online", 00:13:36.713 "raid_level": "raid1", 00:13:36.713 "superblock": true, 00:13:36.713 "num_base_bdevs": 2, 00:13:36.713 "num_base_bdevs_discovered": 1, 00:13:36.713 "num_base_bdevs_operational": 1, 00:13:36.713 "base_bdevs_list": [ 00:13:36.713 { 00:13:36.713 "name": null, 00:13:36.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.713 "is_configured": false, 00:13:36.713 "data_offset": 2048, 00:13:36.713 "data_size": 63488 00:13:36.713 }, 00:13:36.713 { 00:13:36.713 "name": "BaseBdev2", 00:13:36.713 "uuid": "d2ed2b60-b936-586c-bb21-f0fe6aee88b2", 00:13:36.713 "is_configured": true, 00:13:36.713 "data_offset": 2048, 00:13:36.713 "data_size": 63488 00:13:36.713 } 00:13:36.713 ] 00:13:36.713 }' 00:13:36.713 13:13:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:36.713 13:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.282 13:13:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:37.541 [2024-07-25 13:13:47.868397] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.541 [2024-07-25 13:13:47.868434] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.541 [2024-07-25 13:13:47.871305] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.541 [2024-07-25 13:13:47.871333] bdev_raid.c: 343:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:37.541 [2024-07-25 13:13:47.871386] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.541 [2024-07-25 13:13:47.871397] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x99dbc0 name raid_bdev1, state offline 00:13:37.541 0 00:13:37.541 13:13:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 849674 00:13:37.541 13:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 849674 ']' 00:13:37.541 13:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 849674 00:13:37.541 13:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:13:37.541 13:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.541 13:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 849674 00:13:37.541 13:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:37.541 13:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:37.541 13:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 849674' 00:13:37.541 killing process with pid 849674 00:13:37.541 13:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 849674 00:13:37.541 [2024-07-25 13:13:47.935433] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.541 13:13:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 849674 00:13:37.541 [2024-07-25 13:13:47.944228] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.801 13:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.sfnz0NvZHZ 00:13:37.801 13:13:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:13:37.801 13:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:13:37.801 13:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:13:37.801 13:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:13:37.801 13:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:37.801 13:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:37.801 13:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:37.801 00:13:37.801 real 0m6.202s 00:13:37.801 user 0m9.696s 00:13:37.801 sys 0m1.092s 00:13:37.801 13:13:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.801 13:13:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.801 ************************************ 00:13:37.801 END TEST raid_write_error_test 00:13:37.801 ************************************ 00:13:37.801 13:13:48 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:13:37.801 13:13:48 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:13:37.801 13:13:48 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:13:37.801 13:13:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:37.801 13:13:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.801 13:13:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.801 ************************************ 00:13:37.801 START TEST raid_state_function_test 00:13:37.801 ************************************ 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:13:37.801 13:13:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@226 -- # local strip_size 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=850829 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 850829' 00:13:37.801 Process raid pid: 850829 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 850829 /var/tmp/spdk-raid.sock 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 850829 ']' 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:13:37.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:37.801 13:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.060 [2024-07-25 13:13:48.308439] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:13:38.060 [2024-07-25 13:13:48.308494] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:38.060 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:38.060 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.060 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:38.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.061 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:38.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.061 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:38.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.061 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:38.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.061 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:38.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.061 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:38.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.061 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:38.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.061 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:38.061 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.061 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:38.061 [2024-07-25 13:13:48.439336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.061 [2024-07-25 13:13:48.526207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.319 [2024-07-25 13:13:48.584108] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.319 [2024-07-25 13:13:48.584152] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.887 13:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:38.887 13:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:38.887 13:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:39.155 [2024-07-25 13:13:49.418702] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:39.155 [2024-07-25 13:13:49.418739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:39.155 [2024-07-25 13:13:49.418753] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:39.156 [2024-07-25 13:13:49.418764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:39.156 [2024-07-25 13:13:49.418772] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:39.156 [2024-07-25 13:13:49.418782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:39.156 13:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:39.156 13:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:39.156 13:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:39.156 13:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:39.156 13:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:39.156 13:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:39.156 13:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:39.156 13:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:39.156 13:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:13:39.156 13:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:39.156 13:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.156 13:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.416 13:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:39.416 "name": "Existed_Raid", 00:13:39.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.416 "strip_size_kb": 64, 00:13:39.416 "state": "configuring", 00:13:39.416 "raid_level": "raid0", 00:13:39.416 "superblock": false, 00:13:39.416 "num_base_bdevs": 3, 00:13:39.416 "num_base_bdevs_discovered": 0, 00:13:39.416 "num_base_bdevs_operational": 3, 00:13:39.416 "base_bdevs_list": [ 00:13:39.416 { 00:13:39.416 "name": "BaseBdev1", 00:13:39.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.416 "is_configured": false, 00:13:39.416 "data_offset": 0, 00:13:39.416 "data_size": 0 00:13:39.416 }, 00:13:39.416 { 00:13:39.416 "name": "BaseBdev2", 00:13:39.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.416 "is_configured": false, 00:13:39.416 "data_offset": 0, 00:13:39.416 "data_size": 0 00:13:39.416 }, 00:13:39.416 { 00:13:39.416 "name": "BaseBdev3", 00:13:39.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.416 "is_configured": false, 00:13:39.416 "data_offset": 0, 00:13:39.416 "data_size": 0 00:13:39.416 } 00:13:39.416 ] 00:13:39.416 }' 00:13:39.416 13:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:39.416 13:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.985 13:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:39.985 [2024-07-25 13:13:50.433254] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:39.985 [2024-07-25 13:13:50.433284] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2780f40 name Existed_Raid, state configuring 00:13:39.985 13:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:40.245 [2024-07-25 13:13:50.661870] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:40.245 [2024-07-25 13:13:50.661898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:40.245 [2024-07-25 13:13:50.661907] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:40.245 [2024-07-25 13:13:50.661918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:40.245 [2024-07-25 13:13:50.661930] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:40.245 [2024-07-25 13:13:50.661941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:40.245 13:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:40.504 [2024-07-25 13:13:50.895908] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:40.504 BaseBdev1 00:13:40.504 13:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:40.504 13:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 
-- # local bdev_name=BaseBdev1 00:13:40.504 13:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:40.504 13:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:40.504 13:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:40.504 13:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:40.504 13:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:40.763 13:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:41.022 [ 00:13:41.022 { 00:13:41.022 "name": "BaseBdev1", 00:13:41.022 "aliases": [ 00:13:41.022 "41021e4d-9b4c-462b-b1e3-953cb24e4e93" 00:13:41.022 ], 00:13:41.022 "product_name": "Malloc disk", 00:13:41.022 "block_size": 512, 00:13:41.022 "num_blocks": 65536, 00:13:41.022 "uuid": "41021e4d-9b4c-462b-b1e3-953cb24e4e93", 00:13:41.022 "assigned_rate_limits": { 00:13:41.022 "rw_ios_per_sec": 0, 00:13:41.022 "rw_mbytes_per_sec": 0, 00:13:41.022 "r_mbytes_per_sec": 0, 00:13:41.022 "w_mbytes_per_sec": 0 00:13:41.022 }, 00:13:41.022 "claimed": true, 00:13:41.022 "claim_type": "exclusive_write", 00:13:41.022 "zoned": false, 00:13:41.022 "supported_io_types": { 00:13:41.022 "read": true, 00:13:41.022 "write": true, 00:13:41.022 "unmap": true, 00:13:41.022 "flush": true, 00:13:41.022 "reset": true, 00:13:41.022 "nvme_admin": false, 00:13:41.022 "nvme_io": false, 00:13:41.022 "nvme_io_md": false, 00:13:41.022 "write_zeroes": true, 00:13:41.022 "zcopy": true, 00:13:41.022 "get_zone_info": false, 00:13:41.022 "zone_management": false, 00:13:41.022 "zone_append": false, 00:13:41.022 
"compare": false, 00:13:41.022 "compare_and_write": false, 00:13:41.022 "abort": true, 00:13:41.022 "seek_hole": false, 00:13:41.022 "seek_data": false, 00:13:41.022 "copy": true, 00:13:41.022 "nvme_iov_md": false 00:13:41.022 }, 00:13:41.022 "memory_domains": [ 00:13:41.022 { 00:13:41.022 "dma_device_id": "system", 00:13:41.022 "dma_device_type": 1 00:13:41.022 }, 00:13:41.022 { 00:13:41.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.022 "dma_device_type": 2 00:13:41.022 } 00:13:41.022 ], 00:13:41.022 "driver_specific": {} 00:13:41.022 } 00:13:41.022 ] 00:13:41.022 13:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:41.022 13:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:41.022 13:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:41.022 13:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:41.022 13:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:41.022 13:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:41.022 13:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:41.022 13:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:41.022 13:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:41.022 13:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:41.022 13:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:41.022 13:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.022 13:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.281 13:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:41.281 "name": "Existed_Raid", 00:13:41.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.282 "strip_size_kb": 64, 00:13:41.282 "state": "configuring", 00:13:41.282 "raid_level": "raid0", 00:13:41.282 "superblock": false, 00:13:41.282 "num_base_bdevs": 3, 00:13:41.282 "num_base_bdevs_discovered": 1, 00:13:41.282 "num_base_bdevs_operational": 3, 00:13:41.282 "base_bdevs_list": [ 00:13:41.282 { 00:13:41.282 "name": "BaseBdev1", 00:13:41.282 "uuid": "41021e4d-9b4c-462b-b1e3-953cb24e4e93", 00:13:41.282 "is_configured": true, 00:13:41.282 "data_offset": 0, 00:13:41.282 "data_size": 65536 00:13:41.282 }, 00:13:41.282 { 00:13:41.282 "name": "BaseBdev2", 00:13:41.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.282 "is_configured": false, 00:13:41.282 "data_offset": 0, 00:13:41.282 "data_size": 0 00:13:41.282 }, 00:13:41.282 { 00:13:41.282 "name": "BaseBdev3", 00:13:41.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.282 "is_configured": false, 00:13:41.282 "data_offset": 0, 00:13:41.282 "data_size": 0 00:13:41.282 } 00:13:41.282 ] 00:13:41.282 }' 00:13:41.282 13:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:41.282 13:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.850 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:42.110 [2024-07-25 13:13:52.363779] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:42.110 [2024-07-25 13:13:52.363814] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x2780810 name Existed_Raid, state configuring 00:13:42.110 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:42.110 [2024-07-25 13:13:52.592411] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:42.110 [2024-07-25 13:13:52.593801] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:42.110 [2024-07-25 13:13:52.593834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:42.110 [2024-07-25 13:13:52.593843] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:42.110 [2024-07-25 13:13:52.593854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:42.369 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:42.369 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:42.369 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:42.369 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:42.369 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:42.369 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:42.369 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:42.369 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:42.369 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:13:42.369 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:42.369 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:42.369 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:42.369 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.369 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.369 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:42.369 "name": "Existed_Raid", 00:13:42.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.369 "strip_size_kb": 64, 00:13:42.369 "state": "configuring", 00:13:42.369 "raid_level": "raid0", 00:13:42.369 "superblock": false, 00:13:42.369 "num_base_bdevs": 3, 00:13:42.369 "num_base_bdevs_discovered": 1, 00:13:42.369 "num_base_bdevs_operational": 3, 00:13:42.369 "base_bdevs_list": [ 00:13:42.369 { 00:13:42.369 "name": "BaseBdev1", 00:13:42.369 "uuid": "41021e4d-9b4c-462b-b1e3-953cb24e4e93", 00:13:42.369 "is_configured": true, 00:13:42.369 "data_offset": 0, 00:13:42.369 "data_size": 65536 00:13:42.369 }, 00:13:42.369 { 00:13:42.369 "name": "BaseBdev2", 00:13:42.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.369 "is_configured": false, 00:13:42.369 "data_offset": 0, 00:13:42.369 "data_size": 0 00:13:42.369 }, 00:13:42.369 { 00:13:42.369 "name": "BaseBdev3", 00:13:42.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.369 "is_configured": false, 00:13:42.369 "data_offset": 0, 00:13:42.369 "data_size": 0 00:13:42.369 } 00:13:42.369 ] 00:13:42.369 }' 00:13:42.369 13:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:42.369 
13:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.937 13:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:43.195 [2024-07-25 13:13:53.626298] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:43.195 BaseBdev2 00:13:43.195 13:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:43.195 13:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:43.195 13:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:43.195 13:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:43.195 13:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:43.195 13:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:43.195 13:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:43.454 13:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:43.713 [ 00:13:43.713 { 00:13:43.713 "name": "BaseBdev2", 00:13:43.713 "aliases": [ 00:13:43.713 "010f12b3-e4e2-45d6-b9d0-7dbf48d5567d" 00:13:43.713 ], 00:13:43.713 "product_name": "Malloc disk", 00:13:43.713 "block_size": 512, 00:13:43.713 "num_blocks": 65536, 00:13:43.713 "uuid": "010f12b3-e4e2-45d6-b9d0-7dbf48d5567d", 00:13:43.713 "assigned_rate_limits": { 00:13:43.713 "rw_ios_per_sec": 0, 00:13:43.713 "rw_mbytes_per_sec": 0, 00:13:43.713 
"r_mbytes_per_sec": 0, 00:13:43.713 "w_mbytes_per_sec": 0 00:13:43.713 }, 00:13:43.713 "claimed": true, 00:13:43.713 "claim_type": "exclusive_write", 00:13:43.713 "zoned": false, 00:13:43.713 "supported_io_types": { 00:13:43.713 "read": true, 00:13:43.713 "write": true, 00:13:43.713 "unmap": true, 00:13:43.713 "flush": true, 00:13:43.713 "reset": true, 00:13:43.713 "nvme_admin": false, 00:13:43.713 "nvme_io": false, 00:13:43.713 "nvme_io_md": false, 00:13:43.713 "write_zeroes": true, 00:13:43.713 "zcopy": true, 00:13:43.713 "get_zone_info": false, 00:13:43.713 "zone_management": false, 00:13:43.713 "zone_append": false, 00:13:43.713 "compare": false, 00:13:43.713 "compare_and_write": false, 00:13:43.713 "abort": true, 00:13:43.713 "seek_hole": false, 00:13:43.713 "seek_data": false, 00:13:43.713 "copy": true, 00:13:43.713 "nvme_iov_md": false 00:13:43.713 }, 00:13:43.713 "memory_domains": [ 00:13:43.713 { 00:13:43.713 "dma_device_id": "system", 00:13:43.713 "dma_device_type": 1 00:13:43.713 }, 00:13:43.713 { 00:13:43.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.713 "dma_device_type": 2 00:13:43.713 } 00:13:43.713 ], 00:13:43.713 "driver_specific": {} 00:13:43.713 } 00:13:43.713 ] 00:13:43.713 13:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:43.713 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:43.713 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:43.713 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:43.713 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:43.713 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:43.713 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid0 00:13:43.713 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:43.713 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:43.713 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:43.713 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:43.713 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:43.713 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:43.713 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.713 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.971 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:43.971 "name": "Existed_Raid", 00:13:43.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.971 "strip_size_kb": 64, 00:13:43.971 "state": "configuring", 00:13:43.971 "raid_level": "raid0", 00:13:43.971 "superblock": false, 00:13:43.971 "num_base_bdevs": 3, 00:13:43.971 "num_base_bdevs_discovered": 2, 00:13:43.971 "num_base_bdevs_operational": 3, 00:13:43.971 "base_bdevs_list": [ 00:13:43.971 { 00:13:43.971 "name": "BaseBdev1", 00:13:43.971 "uuid": "41021e4d-9b4c-462b-b1e3-953cb24e4e93", 00:13:43.971 "is_configured": true, 00:13:43.971 "data_offset": 0, 00:13:43.971 "data_size": 65536 00:13:43.971 }, 00:13:43.971 { 00:13:43.971 "name": "BaseBdev2", 00:13:43.971 "uuid": "010f12b3-e4e2-45d6-b9d0-7dbf48d5567d", 00:13:43.971 "is_configured": true, 00:13:43.971 "data_offset": 0, 00:13:43.971 "data_size": 65536 00:13:43.971 }, 00:13:43.971 { 00:13:43.971 
"name": "BaseBdev3", 00:13:43.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.971 "is_configured": false, 00:13:43.971 "data_offset": 0, 00:13:43.971 "data_size": 0 00:13:43.971 } 00:13:43.971 ] 00:13:43.971 }' 00:13:43.971 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:43.971 13:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.538 13:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:44.797 [2024-07-25 13:13:55.109353] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:44.797 [2024-07-25 13:13:55.109382] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2781710 00:13:44.797 [2024-07-25 13:13:55.109390] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:44.797 [2024-07-25 13:13:55.109567] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27792e0 00:13:44.797 [2024-07-25 13:13:55.109674] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2781710 00:13:44.797 [2024-07-25 13:13:55.109683] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2781710 00:13:44.797 [2024-07-25 13:13:55.109832] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.797 BaseBdev3 00:13:44.797 13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:44.797 13:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:44.797 13:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:44.797 13:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 
00:13:44.797 13:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:44.797 13:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:44.797 13:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:45.056 13:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:45.316 [ 00:13:45.316 { 00:13:45.316 "name": "BaseBdev3", 00:13:45.316 "aliases": [ 00:13:45.316 "fe138095-5bf7-48a0-9f7e-0ad341b8ff32" 00:13:45.316 ], 00:13:45.316 "product_name": "Malloc disk", 00:13:45.316 "block_size": 512, 00:13:45.316 "num_blocks": 65536, 00:13:45.316 "uuid": "fe138095-5bf7-48a0-9f7e-0ad341b8ff32", 00:13:45.316 "assigned_rate_limits": { 00:13:45.316 "rw_ios_per_sec": 0, 00:13:45.316 "rw_mbytes_per_sec": 0, 00:13:45.316 "r_mbytes_per_sec": 0, 00:13:45.316 "w_mbytes_per_sec": 0 00:13:45.316 }, 00:13:45.316 "claimed": true, 00:13:45.316 "claim_type": "exclusive_write", 00:13:45.316 "zoned": false, 00:13:45.316 "supported_io_types": { 00:13:45.316 "read": true, 00:13:45.316 "write": true, 00:13:45.316 "unmap": true, 00:13:45.316 "flush": true, 00:13:45.316 "reset": true, 00:13:45.316 "nvme_admin": false, 00:13:45.316 "nvme_io": false, 00:13:45.316 "nvme_io_md": false, 00:13:45.316 "write_zeroes": true, 00:13:45.316 "zcopy": true, 00:13:45.316 "get_zone_info": false, 00:13:45.316 "zone_management": false, 00:13:45.316 "zone_append": false, 00:13:45.316 "compare": false, 00:13:45.316 "compare_and_write": false, 00:13:45.316 "abort": true, 00:13:45.316 "seek_hole": false, 00:13:45.316 "seek_data": false, 00:13:45.316 "copy": true, 00:13:45.316 "nvme_iov_md": false 00:13:45.316 }, 00:13:45.316 
"memory_domains": [ 00:13:45.316 { 00:13:45.316 "dma_device_id": "system", 00:13:45.316 "dma_device_type": 1 00:13:45.316 }, 00:13:45.316 { 00:13:45.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.316 "dma_device_type": 2 00:13:45.316 } 00:13:45.316 ], 00:13:45.316 "driver_specific": {} 00:13:45.316 } 00:13:45.316 ] 00:13:45.316 13:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:45.316 13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:45.316 13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:45.316 13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:45.316 13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:45.316 13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:45.316 13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:45.316 13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:45.316 13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:45.316 13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:45.316 13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:45.316 13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:45.316 13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:45.316 13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.316 
13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.576 13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:45.576 "name": "Existed_Raid", 00:13:45.576 "uuid": "0ab30436-36bc-4396-9d57-5496e18ec42d", 00:13:45.576 "strip_size_kb": 64, 00:13:45.576 "state": "online", 00:13:45.576 "raid_level": "raid0", 00:13:45.576 "superblock": false, 00:13:45.576 "num_base_bdevs": 3, 00:13:45.576 "num_base_bdevs_discovered": 3, 00:13:45.576 "num_base_bdevs_operational": 3, 00:13:45.576 "base_bdevs_list": [ 00:13:45.576 { 00:13:45.576 "name": "BaseBdev1", 00:13:45.576 "uuid": "41021e4d-9b4c-462b-b1e3-953cb24e4e93", 00:13:45.576 "is_configured": true, 00:13:45.576 "data_offset": 0, 00:13:45.576 "data_size": 65536 00:13:45.576 }, 00:13:45.576 { 00:13:45.576 "name": "BaseBdev2", 00:13:45.576 "uuid": "010f12b3-e4e2-45d6-b9d0-7dbf48d5567d", 00:13:45.576 "is_configured": true, 00:13:45.576 "data_offset": 0, 00:13:45.576 "data_size": 65536 00:13:45.576 }, 00:13:45.576 { 00:13:45.576 "name": "BaseBdev3", 00:13:45.576 "uuid": "fe138095-5bf7-48a0-9f7e-0ad341b8ff32", 00:13:45.576 "is_configured": true, 00:13:45.576 "data_offset": 0, 00:13:45.576 "data_size": 65536 00:13:45.576 } 00:13:45.576 ] 00:13:45.576 }' 00:13:45.576 13:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:45.576 13:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.145 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:46.145 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:46.145 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:46.145 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 
00:13:46.145 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:46.145 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:46.145 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:46.145 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:46.145 [2024-07-25 13:13:56.597549] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.145 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:46.145 "name": "Existed_Raid", 00:13:46.145 "aliases": [ 00:13:46.145 "0ab30436-36bc-4396-9d57-5496e18ec42d" 00:13:46.145 ], 00:13:46.145 "product_name": "Raid Volume", 00:13:46.145 "block_size": 512, 00:13:46.145 "num_blocks": 196608, 00:13:46.145 "uuid": "0ab30436-36bc-4396-9d57-5496e18ec42d", 00:13:46.145 "assigned_rate_limits": { 00:13:46.145 "rw_ios_per_sec": 0, 00:13:46.145 "rw_mbytes_per_sec": 0, 00:13:46.145 "r_mbytes_per_sec": 0, 00:13:46.146 "w_mbytes_per_sec": 0 00:13:46.146 }, 00:13:46.146 "claimed": false, 00:13:46.146 "zoned": false, 00:13:46.146 "supported_io_types": { 00:13:46.146 "read": true, 00:13:46.146 "write": true, 00:13:46.146 "unmap": true, 00:13:46.146 "flush": true, 00:13:46.146 "reset": true, 00:13:46.146 "nvme_admin": false, 00:13:46.146 "nvme_io": false, 00:13:46.146 "nvme_io_md": false, 00:13:46.146 "write_zeroes": true, 00:13:46.146 "zcopy": false, 00:13:46.146 "get_zone_info": false, 00:13:46.146 "zone_management": false, 00:13:46.146 "zone_append": false, 00:13:46.146 "compare": false, 00:13:46.146 "compare_and_write": false, 00:13:46.146 "abort": false, 00:13:46.146 "seek_hole": false, 00:13:46.146 "seek_data": false, 00:13:46.146 "copy": false, 00:13:46.146 "nvme_iov_md": false 00:13:46.146 }, 
00:13:46.146 "memory_domains": [ 00:13:46.146 { 00:13:46.146 "dma_device_id": "system", 00:13:46.146 "dma_device_type": 1 00:13:46.146 }, 00:13:46.146 { 00:13:46.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.146 "dma_device_type": 2 00:13:46.146 }, 00:13:46.146 { 00:13:46.146 "dma_device_id": "system", 00:13:46.146 "dma_device_type": 1 00:13:46.146 }, 00:13:46.146 { 00:13:46.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.146 "dma_device_type": 2 00:13:46.146 }, 00:13:46.146 { 00:13:46.146 "dma_device_id": "system", 00:13:46.146 "dma_device_type": 1 00:13:46.146 }, 00:13:46.146 { 00:13:46.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.146 "dma_device_type": 2 00:13:46.146 } 00:13:46.146 ], 00:13:46.146 "driver_specific": { 00:13:46.146 "raid": { 00:13:46.146 "uuid": "0ab30436-36bc-4396-9d57-5496e18ec42d", 00:13:46.146 "strip_size_kb": 64, 00:13:46.146 "state": "online", 00:13:46.146 "raid_level": "raid0", 00:13:46.146 "superblock": false, 00:13:46.146 "num_base_bdevs": 3, 00:13:46.146 "num_base_bdevs_discovered": 3, 00:13:46.146 "num_base_bdevs_operational": 3, 00:13:46.146 "base_bdevs_list": [ 00:13:46.146 { 00:13:46.146 "name": "BaseBdev1", 00:13:46.146 "uuid": "41021e4d-9b4c-462b-b1e3-953cb24e4e93", 00:13:46.146 "is_configured": true, 00:13:46.146 "data_offset": 0, 00:13:46.146 "data_size": 65536 00:13:46.146 }, 00:13:46.146 { 00:13:46.146 "name": "BaseBdev2", 00:13:46.146 "uuid": "010f12b3-e4e2-45d6-b9d0-7dbf48d5567d", 00:13:46.146 "is_configured": true, 00:13:46.146 "data_offset": 0, 00:13:46.146 "data_size": 65536 00:13:46.146 }, 00:13:46.146 { 00:13:46.146 "name": "BaseBdev3", 00:13:46.146 "uuid": "fe138095-5bf7-48a0-9f7e-0ad341b8ff32", 00:13:46.146 "is_configured": true, 00:13:46.146 "data_offset": 0, 00:13:46.146 "data_size": 65536 00:13:46.146 } 00:13:46.146 ] 00:13:46.146 } 00:13:46.146 } 00:13:46.146 }' 00:13:46.146 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:46.405 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:46.405 BaseBdev2 00:13:46.405 BaseBdev3' 00:13:46.405 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:46.405 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:46.405 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:46.664 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:46.664 "name": "BaseBdev1", 00:13:46.664 "aliases": [ 00:13:46.664 "41021e4d-9b4c-462b-b1e3-953cb24e4e93" 00:13:46.664 ], 00:13:46.664 "product_name": "Malloc disk", 00:13:46.664 "block_size": 512, 00:13:46.664 "num_blocks": 65536, 00:13:46.664 "uuid": "41021e4d-9b4c-462b-b1e3-953cb24e4e93", 00:13:46.664 "assigned_rate_limits": { 00:13:46.664 "rw_ios_per_sec": 0, 00:13:46.664 "rw_mbytes_per_sec": 0, 00:13:46.664 "r_mbytes_per_sec": 0, 00:13:46.664 "w_mbytes_per_sec": 0 00:13:46.664 }, 00:13:46.664 "claimed": true, 00:13:46.664 "claim_type": "exclusive_write", 00:13:46.664 "zoned": false, 00:13:46.664 "supported_io_types": { 00:13:46.664 "read": true, 00:13:46.664 "write": true, 00:13:46.664 "unmap": true, 00:13:46.664 "flush": true, 00:13:46.664 "reset": true, 00:13:46.664 "nvme_admin": false, 00:13:46.664 "nvme_io": false, 00:13:46.664 "nvme_io_md": false, 00:13:46.664 "write_zeroes": true, 00:13:46.664 "zcopy": true, 00:13:46.664 "get_zone_info": false, 00:13:46.664 "zone_management": false, 00:13:46.664 "zone_append": false, 00:13:46.664 "compare": false, 00:13:46.664 "compare_and_write": false, 00:13:46.664 "abort": true, 00:13:46.664 "seek_hole": false, 00:13:46.664 "seek_data": false, 00:13:46.664 "copy": 
true, 00:13:46.664 "nvme_iov_md": false 00:13:46.664 }, 00:13:46.664 "memory_domains": [ 00:13:46.664 { 00:13:46.664 "dma_device_id": "system", 00:13:46.664 "dma_device_type": 1 00:13:46.664 }, 00:13:46.664 { 00:13:46.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.664 "dma_device_type": 2 00:13:46.664 } 00:13:46.664 ], 00:13:46.664 "driver_specific": {} 00:13:46.664 }' 00:13:46.664 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:46.664 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:46.664 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:46.664 13:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:46.664 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:46.664 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:46.664 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:46.664 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:46.664 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:46.664 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:46.923 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:46.923 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:46.923 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:46.924 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:46.924 13:13:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:47.183 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:47.183 "name": "BaseBdev2", 00:13:47.183 "aliases": [ 00:13:47.183 "010f12b3-e4e2-45d6-b9d0-7dbf48d5567d" 00:13:47.183 ], 00:13:47.183 "product_name": "Malloc disk", 00:13:47.183 "block_size": 512, 00:13:47.183 "num_blocks": 65536, 00:13:47.183 "uuid": "010f12b3-e4e2-45d6-b9d0-7dbf48d5567d", 00:13:47.183 "assigned_rate_limits": { 00:13:47.183 "rw_ios_per_sec": 0, 00:13:47.183 "rw_mbytes_per_sec": 0, 00:13:47.183 "r_mbytes_per_sec": 0, 00:13:47.183 "w_mbytes_per_sec": 0 00:13:47.183 }, 00:13:47.183 "claimed": true, 00:13:47.183 "claim_type": "exclusive_write", 00:13:47.183 "zoned": false, 00:13:47.183 "supported_io_types": { 00:13:47.183 "read": true, 00:13:47.183 "write": true, 00:13:47.183 "unmap": true, 00:13:47.183 "flush": true, 00:13:47.183 "reset": true, 00:13:47.183 "nvme_admin": false, 00:13:47.183 "nvme_io": false, 00:13:47.183 "nvme_io_md": false, 00:13:47.183 "write_zeroes": true, 00:13:47.183 "zcopy": true, 00:13:47.183 "get_zone_info": false, 00:13:47.184 "zone_management": false, 00:13:47.184 "zone_append": false, 00:13:47.184 "compare": false, 00:13:47.184 "compare_and_write": false, 00:13:47.184 "abort": true, 00:13:47.184 "seek_hole": false, 00:13:47.184 "seek_data": false, 00:13:47.184 "copy": true, 00:13:47.184 "nvme_iov_md": false 00:13:47.184 }, 00:13:47.184 "memory_domains": [ 00:13:47.184 { 00:13:47.184 "dma_device_id": "system", 00:13:47.184 "dma_device_type": 1 00:13:47.184 }, 00:13:47.184 { 00:13:47.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.184 "dma_device_type": 2 00:13:47.184 } 00:13:47.184 ], 00:13:47.184 "driver_specific": {} 00:13:47.184 }' 00:13:47.184 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:47.184 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:47.184 13:13:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:47.184 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:47.184 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:47.184 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:47.184 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:47.184 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:47.442 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:47.442 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:47.442 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:47.442 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:47.442 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:47.442 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:47.442 13:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:47.700 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:47.700 "name": "BaseBdev3", 00:13:47.700 "aliases": [ 00:13:47.700 "fe138095-5bf7-48a0-9f7e-0ad341b8ff32" 00:13:47.700 ], 00:13:47.700 "product_name": "Malloc disk", 00:13:47.700 "block_size": 512, 00:13:47.700 "num_blocks": 65536, 00:13:47.700 "uuid": "fe138095-5bf7-48a0-9f7e-0ad341b8ff32", 00:13:47.700 "assigned_rate_limits": { 00:13:47.700 "rw_ios_per_sec": 0, 00:13:47.700 "rw_mbytes_per_sec": 0, 00:13:47.700 "r_mbytes_per_sec": 0, 00:13:47.700 
"w_mbytes_per_sec": 0 00:13:47.700 }, 00:13:47.700 "claimed": true, 00:13:47.700 "claim_type": "exclusive_write", 00:13:47.700 "zoned": false, 00:13:47.700 "supported_io_types": { 00:13:47.700 "read": true, 00:13:47.700 "write": true, 00:13:47.700 "unmap": true, 00:13:47.700 "flush": true, 00:13:47.700 "reset": true, 00:13:47.700 "nvme_admin": false, 00:13:47.700 "nvme_io": false, 00:13:47.700 "nvme_io_md": false, 00:13:47.700 "write_zeroes": true, 00:13:47.700 "zcopy": true, 00:13:47.700 "get_zone_info": false, 00:13:47.700 "zone_management": false, 00:13:47.700 "zone_append": false, 00:13:47.700 "compare": false, 00:13:47.700 "compare_and_write": false, 00:13:47.700 "abort": true, 00:13:47.700 "seek_hole": false, 00:13:47.700 "seek_data": false, 00:13:47.701 "copy": true, 00:13:47.701 "nvme_iov_md": false 00:13:47.701 }, 00:13:47.701 "memory_domains": [ 00:13:47.701 { 00:13:47.701 "dma_device_id": "system", 00:13:47.701 "dma_device_type": 1 00:13:47.701 }, 00:13:47.701 { 00:13:47.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.701 "dma_device_type": 2 00:13:47.701 } 00:13:47.701 ], 00:13:47.701 "driver_specific": {} 00:13:47.701 }' 00:13:47.701 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:47.701 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:47.701 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:47.701 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:47.701 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:47.701 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:47.959 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:47.959 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:47.959 
13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:47.959 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:47.960 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:47.960 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:47.960 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:48.220 [2024-07-25 13:13:58.574719] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:48.220 [2024-07-25 13:13:58.574746] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:48.220 [2024-07-25 13:13:58.574782] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.220 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:48.220 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:48.220 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:48.220 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:48.220 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:48.220 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:13:48.220 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:48.220 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:48.220 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:48.220 13:13:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:48.220 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:48.220 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.220 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.220 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.220 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.220 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.220 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.480 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:48.480 "name": "Existed_Raid", 00:13:48.480 "uuid": "0ab30436-36bc-4396-9d57-5496e18ec42d", 00:13:48.480 "strip_size_kb": 64, 00:13:48.480 "state": "offline", 00:13:48.480 "raid_level": "raid0", 00:13:48.480 "superblock": false, 00:13:48.480 "num_base_bdevs": 3, 00:13:48.480 "num_base_bdevs_discovered": 2, 00:13:48.480 "num_base_bdevs_operational": 2, 00:13:48.480 "base_bdevs_list": [ 00:13:48.480 { 00:13:48.480 "name": null, 00:13:48.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.480 "is_configured": false, 00:13:48.480 "data_offset": 0, 00:13:48.480 "data_size": 65536 00:13:48.480 }, 00:13:48.480 { 00:13:48.480 "name": "BaseBdev2", 00:13:48.480 "uuid": "010f12b3-e4e2-45d6-b9d0-7dbf48d5567d", 00:13:48.480 "is_configured": true, 00:13:48.480 "data_offset": 0, 00:13:48.480 "data_size": 65536 00:13:48.480 }, 00:13:48.480 { 00:13:48.480 "name": "BaseBdev3", 00:13:48.480 "uuid": 
"fe138095-5bf7-48a0-9f7e-0ad341b8ff32", 00:13:48.480 "is_configured": true, 00:13:48.480 "data_offset": 0, 00:13:48.480 "data_size": 65536 00:13:48.480 } 00:13:48.480 ] 00:13:48.480 }' 00:13:48.480 13:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:48.480 13:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.048 13:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:49.048 13:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:49.048 13:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.048 13:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:49.307 13:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:49.307 13:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:49.307 13:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:49.307 [2024-07-25 13:13:59.783037] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:49.566 13:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:49.566 13:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:49.566 13:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.566 13:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:49.566 13:14:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:49.566 13:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:49.566 13:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:49.825 [2024-07-25 13:14:00.250188] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:49.825 [2024-07-25 13:14:00.250230] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2781710 name Existed_Raid, state offline 00:13:49.825 13:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:49.825 13:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:49.825 13:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.825 13:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:50.084 13:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:50.084 13:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:50.084 13:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:13:50.084 13:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:50.084 13:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:50.084 13:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:50.343 BaseBdev2 00:13:50.343 13:14:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:50.343 13:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:50.343 13:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:50.343 13:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:50.343 13:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:50.343 13:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:50.343 13:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:50.603 13:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:50.892 [ 00:13:50.892 { 00:13:50.892 "name": "BaseBdev2", 00:13:50.892 "aliases": [ 00:13:50.892 "4c82a6be-cc80-4521-a724-67c4649f5ab8" 00:13:50.892 ], 00:13:50.893 "product_name": "Malloc disk", 00:13:50.893 "block_size": 512, 00:13:50.893 "num_blocks": 65536, 00:13:50.893 "uuid": "4c82a6be-cc80-4521-a724-67c4649f5ab8", 00:13:50.893 "assigned_rate_limits": { 00:13:50.893 "rw_ios_per_sec": 0, 00:13:50.893 "rw_mbytes_per_sec": 0, 00:13:50.893 "r_mbytes_per_sec": 0, 00:13:50.893 "w_mbytes_per_sec": 0 00:13:50.893 }, 00:13:50.893 "claimed": false, 00:13:50.893 "zoned": false, 00:13:50.893 "supported_io_types": { 00:13:50.893 "read": true, 00:13:50.893 "write": true, 00:13:50.893 "unmap": true, 00:13:50.893 "flush": true, 00:13:50.893 "reset": true, 00:13:50.893 "nvme_admin": false, 00:13:50.893 "nvme_io": false, 00:13:50.893 "nvme_io_md": false, 00:13:50.893 "write_zeroes": true, 00:13:50.893 "zcopy": true, 
00:13:50.893 "get_zone_info": false, 00:13:50.893 "zone_management": false, 00:13:50.893 "zone_append": false, 00:13:50.893 "compare": false, 00:13:50.893 "compare_and_write": false, 00:13:50.893 "abort": true, 00:13:50.893 "seek_hole": false, 00:13:50.893 "seek_data": false, 00:13:50.893 "copy": true, 00:13:50.893 "nvme_iov_md": false 00:13:50.893 }, 00:13:50.893 "memory_domains": [ 00:13:50.893 { 00:13:50.893 "dma_device_id": "system", 00:13:50.893 "dma_device_type": 1 00:13:50.893 }, 00:13:50.893 { 00:13:50.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.893 "dma_device_type": 2 00:13:50.893 } 00:13:50.893 ], 00:13:50.893 "driver_specific": {} 00:13:50.893 } 00:13:50.893 ] 00:13:50.893 13:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:50.893 13:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:50.893 13:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:50.893 13:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:50.893 BaseBdev3 00:13:51.151 13:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:51.151 13:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:51.151 13:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:51.151 13:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:51.151 13:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:51.151 13:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:51.151 13:14:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:51.151 13:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:51.410 [ 00:13:51.410 { 00:13:51.410 "name": "BaseBdev3", 00:13:51.410 "aliases": [ 00:13:51.410 "f668cc02-b4fd-4df4-ae9e-89fe3401d5da" 00:13:51.410 ], 00:13:51.410 "product_name": "Malloc disk", 00:13:51.410 "block_size": 512, 00:13:51.410 "num_blocks": 65536, 00:13:51.410 "uuid": "f668cc02-b4fd-4df4-ae9e-89fe3401d5da", 00:13:51.410 "assigned_rate_limits": { 00:13:51.410 "rw_ios_per_sec": 0, 00:13:51.410 "rw_mbytes_per_sec": 0, 00:13:51.410 "r_mbytes_per_sec": 0, 00:13:51.410 "w_mbytes_per_sec": 0 00:13:51.410 }, 00:13:51.410 "claimed": false, 00:13:51.410 "zoned": false, 00:13:51.410 "supported_io_types": { 00:13:51.410 "read": true, 00:13:51.410 "write": true, 00:13:51.410 "unmap": true, 00:13:51.410 "flush": true, 00:13:51.410 "reset": true, 00:13:51.410 "nvme_admin": false, 00:13:51.410 "nvme_io": false, 00:13:51.410 "nvme_io_md": false, 00:13:51.410 "write_zeroes": true, 00:13:51.410 "zcopy": true, 00:13:51.410 "get_zone_info": false, 00:13:51.410 "zone_management": false, 00:13:51.410 "zone_append": false, 00:13:51.410 "compare": false, 00:13:51.410 "compare_and_write": false, 00:13:51.410 "abort": true, 00:13:51.410 "seek_hole": false, 00:13:51.410 "seek_data": false, 00:13:51.410 "copy": true, 00:13:51.410 "nvme_iov_md": false 00:13:51.410 }, 00:13:51.410 "memory_domains": [ 00:13:51.410 { 00:13:51.410 "dma_device_id": "system", 00:13:51.410 "dma_device_type": 1 00:13:51.410 }, 00:13:51.410 { 00:13:51.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.410 "dma_device_type": 2 00:13:51.410 } 00:13:51.410 ], 00:13:51.410 "driver_specific": {} 00:13:51.410 } 00:13:51.410 ] 00:13:51.410 
13:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:51.410 13:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:51.410 13:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:51.410 13:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:51.669 [2024-07-25 13:14:02.057388] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:51.669 [2024-07-25 13:14:02.057425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:51.669 [2024-07-25 13:14:02.057441] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.669 [2024-07-25 13:14:02.058680] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:51.669 13:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:51.669 13:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:51.669 13:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:51.669 13:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:51.669 13:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:51.669 13:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:51.669 13:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:51.669 13:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:13:51.669 13:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:51.669 13:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:51.669 13:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.669 13:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.929 13:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:51.929 "name": "Existed_Raid", 00:13:51.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.929 "strip_size_kb": 64, 00:13:51.929 "state": "configuring", 00:13:51.929 "raid_level": "raid0", 00:13:51.929 "superblock": false, 00:13:51.929 "num_base_bdevs": 3, 00:13:51.929 "num_base_bdevs_discovered": 2, 00:13:51.929 "num_base_bdevs_operational": 3, 00:13:51.929 "base_bdevs_list": [ 00:13:51.929 { 00:13:51.929 "name": "BaseBdev1", 00:13:51.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.929 "is_configured": false, 00:13:51.929 "data_offset": 0, 00:13:51.929 "data_size": 0 00:13:51.929 }, 00:13:51.929 { 00:13:51.929 "name": "BaseBdev2", 00:13:51.929 "uuid": "4c82a6be-cc80-4521-a724-67c4649f5ab8", 00:13:51.929 "is_configured": true, 00:13:51.929 "data_offset": 0, 00:13:51.929 "data_size": 65536 00:13:51.929 }, 00:13:51.929 { 00:13:51.929 "name": "BaseBdev3", 00:13:51.929 "uuid": "f668cc02-b4fd-4df4-ae9e-89fe3401d5da", 00:13:51.929 "is_configured": true, 00:13:51.929 "data_offset": 0, 00:13:51.929 "data_size": 65536 00:13:51.929 } 00:13:51.929 ] 00:13:51.929 }' 00:13:51.929 13:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:51.929 13:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.495 13:14:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:52.754 [2024-07-25 13:14:03.064033] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:52.754 13:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:52.754 13:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:52.754 13:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:52.754 13:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:52.754 13:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:52.754 13:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:52.754 13:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:52.754 13:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:52.754 13:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:52.754 13:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:52.754 13:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.754 13:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.013 13:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:53.013 "name": "Existed_Raid", 00:13:53.013 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:53.013 "strip_size_kb": 64, 00:13:53.013 "state": "configuring", 00:13:53.013 "raid_level": "raid0", 00:13:53.013 "superblock": false, 00:13:53.013 "num_base_bdevs": 3, 00:13:53.013 "num_base_bdevs_discovered": 1, 00:13:53.013 "num_base_bdevs_operational": 3, 00:13:53.013 "base_bdevs_list": [ 00:13:53.013 { 00:13:53.013 "name": "BaseBdev1", 00:13:53.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.013 "is_configured": false, 00:13:53.013 "data_offset": 0, 00:13:53.013 "data_size": 0 00:13:53.013 }, 00:13:53.013 { 00:13:53.013 "name": null, 00:13:53.013 "uuid": "4c82a6be-cc80-4521-a724-67c4649f5ab8", 00:13:53.013 "is_configured": false, 00:13:53.013 "data_offset": 0, 00:13:53.013 "data_size": 65536 00:13:53.013 }, 00:13:53.013 { 00:13:53.013 "name": "BaseBdev3", 00:13:53.013 "uuid": "f668cc02-b4fd-4df4-ae9e-89fe3401d5da", 00:13:53.013 "is_configured": true, 00:13:53.013 "data_offset": 0, 00:13:53.013 "data_size": 65536 00:13:53.014 } 00:13:53.014 ] 00:13:53.014 }' 00:13:53.014 13:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:53.014 13:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.582 13:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.582 13:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:53.841 13:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:53.841 13:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:54.099 [2024-07-25 13:14:04.342634] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.099 
BaseBdev1 00:13:54.099 13:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:54.099 13:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:54.099 13:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:54.099 13:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:54.099 13:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:54.099 13:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:54.099 13:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:54.099 13:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:54.359 [ 00:13:54.359 { 00:13:54.359 "name": "BaseBdev1", 00:13:54.359 "aliases": [ 00:13:54.359 "309deabc-4102-410b-84c1-e2381fa9c574" 00:13:54.359 ], 00:13:54.359 "product_name": "Malloc disk", 00:13:54.359 "block_size": 512, 00:13:54.359 "num_blocks": 65536, 00:13:54.359 "uuid": "309deabc-4102-410b-84c1-e2381fa9c574", 00:13:54.359 "assigned_rate_limits": { 00:13:54.359 "rw_ios_per_sec": 0, 00:13:54.359 "rw_mbytes_per_sec": 0, 00:13:54.359 "r_mbytes_per_sec": 0, 00:13:54.359 "w_mbytes_per_sec": 0 00:13:54.359 }, 00:13:54.359 "claimed": true, 00:13:54.359 "claim_type": "exclusive_write", 00:13:54.359 "zoned": false, 00:13:54.359 "supported_io_types": { 00:13:54.359 "read": true, 00:13:54.359 "write": true, 00:13:54.359 "unmap": true, 00:13:54.359 "flush": true, 00:13:54.359 "reset": true, 00:13:54.359 "nvme_admin": false, 00:13:54.359 "nvme_io": false, 00:13:54.359 
"nvme_io_md": false, 00:13:54.359 "write_zeroes": true, 00:13:54.359 "zcopy": true, 00:13:54.359 "get_zone_info": false, 00:13:54.359 "zone_management": false, 00:13:54.359 "zone_append": false, 00:13:54.359 "compare": false, 00:13:54.359 "compare_and_write": false, 00:13:54.359 "abort": true, 00:13:54.359 "seek_hole": false, 00:13:54.359 "seek_data": false, 00:13:54.359 "copy": true, 00:13:54.359 "nvme_iov_md": false 00:13:54.359 }, 00:13:54.359 "memory_domains": [ 00:13:54.359 { 00:13:54.359 "dma_device_id": "system", 00:13:54.359 "dma_device_type": 1 00:13:54.359 }, 00:13:54.359 { 00:13:54.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.359 "dma_device_type": 2 00:13:54.359 } 00:13:54.359 ], 00:13:54.359 "driver_specific": {} 00:13:54.359 } 00:13:54.359 ] 00:13:54.359 13:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:54.359 13:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:54.359 13:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:54.359 13:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:54.359 13:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:54.359 13:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:54.359 13:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:54.359 13:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:54.359 13:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:54.359 13:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:54.359 13:14:04 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@124 -- # local tmp 00:13:54.359 13:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.359 13:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.617 13:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:54.617 "name": "Existed_Raid", 00:13:54.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.617 "strip_size_kb": 64, 00:13:54.617 "state": "configuring", 00:13:54.617 "raid_level": "raid0", 00:13:54.617 "superblock": false, 00:13:54.617 "num_base_bdevs": 3, 00:13:54.617 "num_base_bdevs_discovered": 2, 00:13:54.617 "num_base_bdevs_operational": 3, 00:13:54.617 "base_bdevs_list": [ 00:13:54.617 { 00:13:54.617 "name": "BaseBdev1", 00:13:54.617 "uuid": "309deabc-4102-410b-84c1-e2381fa9c574", 00:13:54.617 "is_configured": true, 00:13:54.617 "data_offset": 0, 00:13:54.617 "data_size": 65536 00:13:54.617 }, 00:13:54.617 { 00:13:54.617 "name": null, 00:13:54.617 "uuid": "4c82a6be-cc80-4521-a724-67c4649f5ab8", 00:13:54.617 "is_configured": false, 00:13:54.617 "data_offset": 0, 00:13:54.617 "data_size": 65536 00:13:54.617 }, 00:13:54.617 { 00:13:54.617 "name": "BaseBdev3", 00:13:54.617 "uuid": "f668cc02-b4fd-4df4-ae9e-89fe3401d5da", 00:13:54.617 "is_configured": true, 00:13:54.617 "data_offset": 0, 00:13:54.617 "data_size": 65536 00:13:54.617 } 00:13:54.617 ] 00:13:54.617 }' 00:13:54.617 13:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:54.617 13:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.184 13:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:55.184 13:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.444 13:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:55.444 13:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:55.703 [2024-07-25 13:14:06.047156] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:55.703 13:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:55.703 13:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:55.703 13:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:55.703 13:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:55.703 13:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:55.703 13:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:55.703 13:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:55.703 13:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:55.703 13:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:55.703 13:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:55.703 13:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.703 13:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:13:55.961 13:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:55.961 "name": "Existed_Raid", 00:13:55.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.961 "strip_size_kb": 64, 00:13:55.961 "state": "configuring", 00:13:55.961 "raid_level": "raid0", 00:13:55.961 "superblock": false, 00:13:55.961 "num_base_bdevs": 3, 00:13:55.961 "num_base_bdevs_discovered": 1, 00:13:55.961 "num_base_bdevs_operational": 3, 00:13:55.961 "base_bdevs_list": [ 00:13:55.961 { 00:13:55.961 "name": "BaseBdev1", 00:13:55.961 "uuid": "309deabc-4102-410b-84c1-e2381fa9c574", 00:13:55.961 "is_configured": true, 00:13:55.961 "data_offset": 0, 00:13:55.961 "data_size": 65536 00:13:55.961 }, 00:13:55.961 { 00:13:55.961 "name": null, 00:13:55.962 "uuid": "4c82a6be-cc80-4521-a724-67c4649f5ab8", 00:13:55.962 "is_configured": false, 00:13:55.962 "data_offset": 0, 00:13:55.962 "data_size": 65536 00:13:55.962 }, 00:13:55.962 { 00:13:55.962 "name": null, 00:13:55.962 "uuid": "f668cc02-b4fd-4df4-ae9e-89fe3401d5da", 00:13:55.962 "is_configured": false, 00:13:55.962 "data_offset": 0, 00:13:55.962 "data_size": 65536 00:13:55.962 } 00:13:55.962 ] 00:13:55.962 }' 00:13:55.962 13:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:55.962 13:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.528 13:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.528 13:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:56.787 13:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:56.787 13:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:57.046 [2024-07-25 13:14:07.310500] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.047 13:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:57.047 13:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:57.047 13:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:57.047 13:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:57.047 13:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:57.047 13:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:57.047 13:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:57.047 13:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:57.047 13:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:57.047 13:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:57.047 13:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.047 13:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.306 13:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:57.306 "name": "Existed_Raid", 00:13:57.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.306 "strip_size_kb": 64, 
00:13:57.306 "state": "configuring", 00:13:57.306 "raid_level": "raid0", 00:13:57.306 "superblock": false, 00:13:57.306 "num_base_bdevs": 3, 00:13:57.306 "num_base_bdevs_discovered": 2, 00:13:57.306 "num_base_bdevs_operational": 3, 00:13:57.306 "base_bdevs_list": [ 00:13:57.306 { 00:13:57.306 "name": "BaseBdev1", 00:13:57.306 "uuid": "309deabc-4102-410b-84c1-e2381fa9c574", 00:13:57.307 "is_configured": true, 00:13:57.307 "data_offset": 0, 00:13:57.307 "data_size": 65536 00:13:57.307 }, 00:13:57.307 { 00:13:57.307 "name": null, 00:13:57.307 "uuid": "4c82a6be-cc80-4521-a724-67c4649f5ab8", 00:13:57.307 "is_configured": false, 00:13:57.307 "data_offset": 0, 00:13:57.307 "data_size": 65536 00:13:57.307 }, 00:13:57.307 { 00:13:57.307 "name": "BaseBdev3", 00:13:57.307 "uuid": "f668cc02-b4fd-4df4-ae9e-89fe3401d5da", 00:13:57.307 "is_configured": true, 00:13:57.307 "data_offset": 0, 00:13:57.307 "data_size": 65536 00:13:57.307 } 00:13:57.307 ] 00:13:57.307 }' 00:13:57.307 13:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:57.307 13:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.872 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:57.872 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.872 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:57.872 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:58.130 [2024-07-25 13:14:08.549790] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.130 13:14:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:58.130 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:58.130 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:58.130 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:58.130 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:58.130 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:58.130 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:58.130 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:58.130 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:58.130 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:58.130 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.130 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.388 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:58.389 "name": "Existed_Raid", 00:13:58.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.389 "strip_size_kb": 64, 00:13:58.389 "state": "configuring", 00:13:58.389 "raid_level": "raid0", 00:13:58.389 "superblock": false, 00:13:58.389 "num_base_bdevs": 3, 00:13:58.389 "num_base_bdevs_discovered": 1, 00:13:58.389 "num_base_bdevs_operational": 3, 00:13:58.389 "base_bdevs_list": [ 00:13:58.389 { 00:13:58.389 "name": null, 00:13:58.389 "uuid": 
"309deabc-4102-410b-84c1-e2381fa9c574", 00:13:58.389 "is_configured": false, 00:13:58.389 "data_offset": 0, 00:13:58.389 "data_size": 65536 00:13:58.389 }, 00:13:58.389 { 00:13:58.389 "name": null, 00:13:58.389 "uuid": "4c82a6be-cc80-4521-a724-67c4649f5ab8", 00:13:58.389 "is_configured": false, 00:13:58.389 "data_offset": 0, 00:13:58.389 "data_size": 65536 00:13:58.389 }, 00:13:58.389 { 00:13:58.389 "name": "BaseBdev3", 00:13:58.389 "uuid": "f668cc02-b4fd-4df4-ae9e-89fe3401d5da", 00:13:58.389 "is_configured": true, 00:13:58.389 "data_offset": 0, 00:13:58.389 "data_size": 65536 00:13:58.389 } 00:13:58.389 ] 00:13:58.389 }' 00:13:58.389 13:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:58.389 13:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.956 13:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.956 13:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:59.215 13:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:59.215 13:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:59.474 [2024-07-25 13:14:09.795190] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.474 13:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:59.474 13:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:59.474 13:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:13:59.474 13:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:59.474 13:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:59.474 13:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:59.474 13:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:59.474 13:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:59.474 13:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:59.474 13:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:59.474 13:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.474 13:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.732 13:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:59.732 "name": "Existed_Raid", 00:13:59.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.732 "strip_size_kb": 64, 00:13:59.732 "state": "configuring", 00:13:59.732 "raid_level": "raid0", 00:13:59.732 "superblock": false, 00:13:59.732 "num_base_bdevs": 3, 00:13:59.732 "num_base_bdevs_discovered": 2, 00:13:59.732 "num_base_bdevs_operational": 3, 00:13:59.732 "base_bdevs_list": [ 00:13:59.732 { 00:13:59.732 "name": null, 00:13:59.732 "uuid": "309deabc-4102-410b-84c1-e2381fa9c574", 00:13:59.732 "is_configured": false, 00:13:59.732 "data_offset": 0, 00:13:59.732 "data_size": 65536 00:13:59.732 }, 00:13:59.732 { 00:13:59.732 "name": "BaseBdev2", 00:13:59.732 "uuid": "4c82a6be-cc80-4521-a724-67c4649f5ab8", 00:13:59.732 "is_configured": true, 
00:13:59.732 "data_offset": 0, 00:13:59.732 "data_size": 65536 00:13:59.732 }, 00:13:59.732 { 00:13:59.732 "name": "BaseBdev3", 00:13:59.732 "uuid": "f668cc02-b4fd-4df4-ae9e-89fe3401d5da", 00:13:59.732 "is_configured": true, 00:13:59.732 "data_offset": 0, 00:13:59.732 "data_size": 65536 00:13:59.732 } 00:13:59.732 ] 00:13:59.732 }' 00:13:59.732 13:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:59.732 13:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.301 13:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.301 13:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:00.561 13:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:00.561 13:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.561 13:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:00.822 13:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 309deabc-4102-410b-84c1-e2381fa9c574 00:14:01.080 [2024-07-25 13:14:11.310269] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:01.080 [2024-07-25 13:14:11.310302] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x27776f0 00:14:01.080 [2024-07-25 13:14:11.310309] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:01.080 [2024-07-25 13:14:11.310482] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x277c9c0 00:14:01.080 [2024-07-25 13:14:11.310586] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x27776f0 00:14:01.080 [2024-07-25 13:14:11.310594] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x27776f0 00:14:01.080 [2024-07-25 13:14:11.310735] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.080 NewBaseBdev 00:14:01.080 13:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:01.080 13:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:01.080 13:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:01.080 13:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:01.080 13:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:01.080 13:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:01.080 13:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:01.081 13:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:01.339 [ 00:14:01.339 { 00:14:01.339 "name": "NewBaseBdev", 00:14:01.339 "aliases": [ 00:14:01.339 "309deabc-4102-410b-84c1-e2381fa9c574" 00:14:01.339 ], 00:14:01.339 "product_name": "Malloc disk", 00:14:01.339 "block_size": 512, 00:14:01.339 "num_blocks": 65536, 00:14:01.339 "uuid": "309deabc-4102-410b-84c1-e2381fa9c574", 00:14:01.339 "assigned_rate_limits": { 00:14:01.339 "rw_ios_per_sec": 0, 
00:14:01.339 "rw_mbytes_per_sec": 0, 00:14:01.339 "r_mbytes_per_sec": 0, 00:14:01.339 "w_mbytes_per_sec": 0 00:14:01.339 }, 00:14:01.339 "claimed": true, 00:14:01.339 "claim_type": "exclusive_write", 00:14:01.339 "zoned": false, 00:14:01.339 "supported_io_types": { 00:14:01.339 "read": true, 00:14:01.339 "write": true, 00:14:01.339 "unmap": true, 00:14:01.339 "flush": true, 00:14:01.339 "reset": true, 00:14:01.339 "nvme_admin": false, 00:14:01.339 "nvme_io": false, 00:14:01.339 "nvme_io_md": false, 00:14:01.339 "write_zeroes": true, 00:14:01.339 "zcopy": true, 00:14:01.339 "get_zone_info": false, 00:14:01.339 "zone_management": false, 00:14:01.339 "zone_append": false, 00:14:01.339 "compare": false, 00:14:01.339 "compare_and_write": false, 00:14:01.339 "abort": true, 00:14:01.339 "seek_hole": false, 00:14:01.339 "seek_data": false, 00:14:01.339 "copy": true, 00:14:01.339 "nvme_iov_md": false 00:14:01.339 }, 00:14:01.339 "memory_domains": [ 00:14:01.339 { 00:14:01.339 "dma_device_id": "system", 00:14:01.339 "dma_device_type": 1 00:14:01.339 }, 00:14:01.339 { 00:14:01.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.339 "dma_device_type": 2 00:14:01.339 } 00:14:01.339 ], 00:14:01.339 "driver_specific": {} 00:14:01.339 } 00:14:01.339 ] 00:14:01.339 13:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:01.339 13:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:01.339 13:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.339 13:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:01.339 13:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:01.339 13:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:01.340 13:14:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:01.340 13:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.340 13:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.340 13:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.340 13:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.340 13:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.340 13:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.599 13:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.599 "name": "Existed_Raid", 00:14:01.599 "uuid": "c97f2a37-c952-48dc-9122-79136bc9d64a", 00:14:01.599 "strip_size_kb": 64, 00:14:01.599 "state": "online", 00:14:01.599 "raid_level": "raid0", 00:14:01.599 "superblock": false, 00:14:01.599 "num_base_bdevs": 3, 00:14:01.599 "num_base_bdevs_discovered": 3, 00:14:01.599 "num_base_bdevs_operational": 3, 00:14:01.599 "base_bdevs_list": [ 00:14:01.599 { 00:14:01.599 "name": "NewBaseBdev", 00:14:01.599 "uuid": "309deabc-4102-410b-84c1-e2381fa9c574", 00:14:01.599 "is_configured": true, 00:14:01.599 "data_offset": 0, 00:14:01.599 "data_size": 65536 00:14:01.599 }, 00:14:01.599 { 00:14:01.599 "name": "BaseBdev2", 00:14:01.599 "uuid": "4c82a6be-cc80-4521-a724-67c4649f5ab8", 00:14:01.599 "is_configured": true, 00:14:01.599 "data_offset": 0, 00:14:01.599 "data_size": 65536 00:14:01.599 }, 00:14:01.599 { 00:14:01.599 "name": "BaseBdev3", 00:14:01.599 "uuid": "f668cc02-b4fd-4df4-ae9e-89fe3401d5da", 00:14:01.599 "is_configured": true, 00:14:01.599 "data_offset": 0, 
00:14:01.599 "data_size": 65536 00:14:01.599 } 00:14:01.599 ] 00:14:01.599 }' 00:14:01.599 13:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.599 13:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.168 13:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:02.168 13:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:02.168 13:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:02.168 13:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:02.168 13:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:02.168 13:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:02.168 13:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:02.168 13:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:02.428 [2024-07-25 13:14:12.778426] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.428 13:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:02.428 "name": "Existed_Raid", 00:14:02.428 "aliases": [ 00:14:02.428 "c97f2a37-c952-48dc-9122-79136bc9d64a" 00:14:02.428 ], 00:14:02.428 "product_name": "Raid Volume", 00:14:02.428 "block_size": 512, 00:14:02.428 "num_blocks": 196608, 00:14:02.428 "uuid": "c97f2a37-c952-48dc-9122-79136bc9d64a", 00:14:02.428 "assigned_rate_limits": { 00:14:02.428 "rw_ios_per_sec": 0, 00:14:02.428 "rw_mbytes_per_sec": 0, 00:14:02.428 "r_mbytes_per_sec": 0, 00:14:02.428 "w_mbytes_per_sec": 0 00:14:02.428 }, 00:14:02.428 
"claimed": false, 00:14:02.428 "zoned": false, 00:14:02.428 "supported_io_types": { 00:14:02.428 "read": true, 00:14:02.428 "write": true, 00:14:02.428 "unmap": true, 00:14:02.428 "flush": true, 00:14:02.428 "reset": true, 00:14:02.428 "nvme_admin": false, 00:14:02.428 "nvme_io": false, 00:14:02.428 "nvme_io_md": false, 00:14:02.428 "write_zeroes": true, 00:14:02.428 "zcopy": false, 00:14:02.428 "get_zone_info": false, 00:14:02.428 "zone_management": false, 00:14:02.428 "zone_append": false, 00:14:02.428 "compare": false, 00:14:02.428 "compare_and_write": false, 00:14:02.428 "abort": false, 00:14:02.428 "seek_hole": false, 00:14:02.428 "seek_data": false, 00:14:02.428 "copy": false, 00:14:02.428 "nvme_iov_md": false 00:14:02.428 }, 00:14:02.428 "memory_domains": [ 00:14:02.428 { 00:14:02.428 "dma_device_id": "system", 00:14:02.428 "dma_device_type": 1 00:14:02.428 }, 00:14:02.428 { 00:14:02.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.428 "dma_device_type": 2 00:14:02.428 }, 00:14:02.428 { 00:14:02.428 "dma_device_id": "system", 00:14:02.428 "dma_device_type": 1 00:14:02.428 }, 00:14:02.428 { 00:14:02.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.428 "dma_device_type": 2 00:14:02.428 }, 00:14:02.428 { 00:14:02.428 "dma_device_id": "system", 00:14:02.428 "dma_device_type": 1 00:14:02.428 }, 00:14:02.428 { 00:14:02.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.428 "dma_device_type": 2 00:14:02.428 } 00:14:02.428 ], 00:14:02.428 "driver_specific": { 00:14:02.428 "raid": { 00:14:02.428 "uuid": "c97f2a37-c952-48dc-9122-79136bc9d64a", 00:14:02.428 "strip_size_kb": 64, 00:14:02.428 "state": "online", 00:14:02.428 "raid_level": "raid0", 00:14:02.428 "superblock": false, 00:14:02.428 "num_base_bdevs": 3, 00:14:02.428 "num_base_bdevs_discovered": 3, 00:14:02.428 "num_base_bdevs_operational": 3, 00:14:02.428 "base_bdevs_list": [ 00:14:02.428 { 00:14:02.428 "name": "NewBaseBdev", 00:14:02.428 "uuid": "309deabc-4102-410b-84c1-e2381fa9c574", 
00:14:02.428 "is_configured": true, 00:14:02.428 "data_offset": 0, 00:14:02.428 "data_size": 65536 00:14:02.428 }, 00:14:02.428 { 00:14:02.428 "name": "BaseBdev2", 00:14:02.428 "uuid": "4c82a6be-cc80-4521-a724-67c4649f5ab8", 00:14:02.428 "is_configured": true, 00:14:02.428 "data_offset": 0, 00:14:02.428 "data_size": 65536 00:14:02.428 }, 00:14:02.428 { 00:14:02.428 "name": "BaseBdev3", 00:14:02.428 "uuid": "f668cc02-b4fd-4df4-ae9e-89fe3401d5da", 00:14:02.428 "is_configured": true, 00:14:02.428 "data_offset": 0, 00:14:02.428 "data_size": 65536 00:14:02.428 } 00:14:02.428 ] 00:14:02.428 } 00:14:02.428 } 00:14:02.428 }' 00:14:02.428 13:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:02.428 13:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:02.428 BaseBdev2 00:14:02.428 BaseBdev3' 00:14:02.428 13:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:02.428 13:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:02.428 13:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:02.687 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:02.687 "name": "NewBaseBdev", 00:14:02.687 "aliases": [ 00:14:02.687 "309deabc-4102-410b-84c1-e2381fa9c574" 00:14:02.687 ], 00:14:02.687 "product_name": "Malloc disk", 00:14:02.687 "block_size": 512, 00:14:02.687 "num_blocks": 65536, 00:14:02.687 "uuid": "309deabc-4102-410b-84c1-e2381fa9c574", 00:14:02.687 "assigned_rate_limits": { 00:14:02.687 "rw_ios_per_sec": 0, 00:14:02.687 "rw_mbytes_per_sec": 0, 00:14:02.687 "r_mbytes_per_sec": 0, 00:14:02.687 "w_mbytes_per_sec": 0 00:14:02.687 }, 00:14:02.687 
"claimed": true, 00:14:02.687 "claim_type": "exclusive_write", 00:14:02.687 "zoned": false, 00:14:02.687 "supported_io_types": { 00:14:02.687 "read": true, 00:14:02.687 "write": true, 00:14:02.687 "unmap": true, 00:14:02.687 "flush": true, 00:14:02.687 "reset": true, 00:14:02.687 "nvme_admin": false, 00:14:02.687 "nvme_io": false, 00:14:02.687 "nvme_io_md": false, 00:14:02.687 "write_zeroes": true, 00:14:02.687 "zcopy": true, 00:14:02.687 "get_zone_info": false, 00:14:02.687 "zone_management": false, 00:14:02.687 "zone_append": false, 00:14:02.687 "compare": false, 00:14:02.687 "compare_and_write": false, 00:14:02.687 "abort": true, 00:14:02.687 "seek_hole": false, 00:14:02.687 "seek_data": false, 00:14:02.687 "copy": true, 00:14:02.687 "nvme_iov_md": false 00:14:02.687 }, 00:14:02.687 "memory_domains": [ 00:14:02.687 { 00:14:02.687 "dma_device_id": "system", 00:14:02.687 "dma_device_type": 1 00:14:02.687 }, 00:14:02.687 { 00:14:02.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.687 "dma_device_type": 2 00:14:02.687 } 00:14:02.687 ], 00:14:02.687 "driver_specific": {} 00:14:02.687 }' 00:14:02.687 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:02.687 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:02.687 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:02.687 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:02.946 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:02.946 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:02.946 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:02.946 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:02.946 13:14:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:02.946 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:02.946 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:02.946 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:02.946 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:02.946 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:02.946 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:03.204 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:03.204 "name": "BaseBdev2", 00:14:03.204 "aliases": [ 00:14:03.204 "4c82a6be-cc80-4521-a724-67c4649f5ab8" 00:14:03.204 ], 00:14:03.204 "product_name": "Malloc disk", 00:14:03.204 "block_size": 512, 00:14:03.204 "num_blocks": 65536, 00:14:03.204 "uuid": "4c82a6be-cc80-4521-a724-67c4649f5ab8", 00:14:03.204 "assigned_rate_limits": { 00:14:03.204 "rw_ios_per_sec": 0, 00:14:03.204 "rw_mbytes_per_sec": 0, 00:14:03.204 "r_mbytes_per_sec": 0, 00:14:03.204 "w_mbytes_per_sec": 0 00:14:03.204 }, 00:14:03.204 "claimed": true, 00:14:03.204 "claim_type": "exclusive_write", 00:14:03.204 "zoned": false, 00:14:03.204 "supported_io_types": { 00:14:03.204 "read": true, 00:14:03.204 "write": true, 00:14:03.204 "unmap": true, 00:14:03.204 "flush": true, 00:14:03.204 "reset": true, 00:14:03.204 "nvme_admin": false, 00:14:03.204 "nvme_io": false, 00:14:03.204 "nvme_io_md": false, 00:14:03.204 "write_zeroes": true, 00:14:03.204 "zcopy": true, 00:14:03.204 "get_zone_info": false, 00:14:03.204 "zone_management": false, 00:14:03.204 "zone_append": false, 00:14:03.204 "compare": false, 00:14:03.204 "compare_and_write": false, 
00:14:03.204 "abort": true, 00:14:03.204 "seek_hole": false, 00:14:03.204 "seek_data": false, 00:14:03.204 "copy": true, 00:14:03.204 "nvme_iov_md": false 00:14:03.204 }, 00:14:03.204 "memory_domains": [ 00:14:03.204 { 00:14:03.204 "dma_device_id": "system", 00:14:03.204 "dma_device_type": 1 00:14:03.204 }, 00:14:03.204 { 00:14:03.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.204 "dma_device_type": 2 00:14:03.204 } 00:14:03.204 ], 00:14:03.204 "driver_specific": {} 00:14:03.204 }' 00:14:03.204 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.204 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.462 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:03.462 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.462 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.462 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:03.462 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.462 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.462 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:03.462 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.462 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.721 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:03.721 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:03.721 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:03.721 13:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:03.721 13:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:03.721 "name": "BaseBdev3", 00:14:03.721 "aliases": [ 00:14:03.721 "f668cc02-b4fd-4df4-ae9e-89fe3401d5da" 00:14:03.721 ], 00:14:03.721 "product_name": "Malloc disk", 00:14:03.721 "block_size": 512, 00:14:03.721 "num_blocks": 65536, 00:14:03.721 "uuid": "f668cc02-b4fd-4df4-ae9e-89fe3401d5da", 00:14:03.721 "assigned_rate_limits": { 00:14:03.721 "rw_ios_per_sec": 0, 00:14:03.721 "rw_mbytes_per_sec": 0, 00:14:03.721 "r_mbytes_per_sec": 0, 00:14:03.721 "w_mbytes_per_sec": 0 00:14:03.721 }, 00:14:03.721 "claimed": true, 00:14:03.721 "claim_type": "exclusive_write", 00:14:03.721 "zoned": false, 00:14:03.721 "supported_io_types": { 00:14:03.721 "read": true, 00:14:03.721 "write": true, 00:14:03.721 "unmap": true, 00:14:03.721 "flush": true, 00:14:03.721 "reset": true, 00:14:03.721 "nvme_admin": false, 00:14:03.721 "nvme_io": false, 00:14:03.721 "nvme_io_md": false, 00:14:03.721 "write_zeroes": true, 00:14:03.721 "zcopy": true, 00:14:03.721 "get_zone_info": false, 00:14:03.721 "zone_management": false, 00:14:03.721 "zone_append": false, 00:14:03.721 "compare": false, 00:14:03.721 "compare_and_write": false, 00:14:03.721 "abort": true, 00:14:03.721 "seek_hole": false, 00:14:03.721 "seek_data": false, 00:14:03.721 "copy": true, 00:14:03.721 "nvme_iov_md": false 00:14:03.721 }, 00:14:03.721 "memory_domains": [ 00:14:03.721 { 00:14:03.721 "dma_device_id": "system", 00:14:03.721 "dma_device_type": 1 00:14:03.721 }, 00:14:03.721 { 00:14:03.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.721 "dma_device_type": 2 00:14:03.721 } 00:14:03.721 ], 00:14:03.721 "driver_specific": {} 00:14:03.721 }' 00:14:03.721 13:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.980 13:14:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.980 13:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:03.980 13:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.980 13:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.980 13:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:03.980 13:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.980 13:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.980 13:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:03.980 13:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:04.239 13:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:04.239 13:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:04.239 13:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:04.497 [2024-07-25 13:14:14.747369] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:04.497 [2024-07-25 13:14:14.747393] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.497 [2024-07-25 13:14:14.747439] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.497 [2024-07-25 13:14:14.747484] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.497 [2024-07-25 13:14:14.747495] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27776f0 name Existed_Raid, state offline 
00:14:04.497 13:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 850829 00:14:04.497 13:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 850829 ']' 00:14:04.497 13:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 850829 00:14:04.497 13:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:04.497 13:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.497 13:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 850829 00:14:04.497 13:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:04.497 13:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:04.497 13:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 850829' 00:14:04.497 killing process with pid 850829 00:14:04.497 13:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 850829 00:14:04.497 [2024-07-25 13:14:14.817525] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:04.497 13:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 850829 00:14:04.497 [2024-07-25 13:14:14.841495] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:04.759 13:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:04.759 00:14:04.759 real 0m26.786s 00:14:04.759 user 0m48.992s 00:14:04.759 sys 0m4.926s 00:14:04.759 13:14:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.759 13:14:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.759 ************************************ 00:14:04.759 END TEST 
raid_state_function_test 00:14:04.759 ************************************ 00:14:04.759 13:14:15 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:14:04.760 13:14:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:04.760 13:14:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.760 13:14:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:04.760 ************************************ 00:14:04.760 START TEST raid_state_function_test_sb 00:14:04.760 ************************************ 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=855937 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 855937' 00:14:04.760 Process raid pid: 855937 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 855937 /var/tmp/spdk-raid.sock 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 855937 ']' 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:04.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:04.760 13:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.760 [2024-07-25 13:14:15.169700] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:14:04.760 [2024-07-25 13:14:15.169754] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:04.760 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.760 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:04.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.761 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:04.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.761 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:04.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.761 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:04.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.761 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:05.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.077 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:05.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.077 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:05.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.077 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:05.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.077 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:05.077 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.077 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:05.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.077 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:05.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.077 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:05.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.077 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:05.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.077 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:05.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:05.077 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:05.077 [2024-07-25 13:14:15.303768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.077 [2024-07-25 13:14:15.389635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.077 [2024-07-25 13:14:15.455801] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.077 [2024-07-25 13:14:15.455834] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.646 13:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:05.646 13:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:05.646 13:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:05.905 [2024-07-25 13:14:16.259410] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:05.905 [2024-07-25 13:14:16.259450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:14:05.905 [2024-07-25 13:14:16.259461] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:05.905 [2024-07-25 13:14:16.259472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:05.905 [2024-07-25 13:14:16.259480] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:05.905 [2024-07-25 13:14:16.259490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:05.905 13:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:05.905 13:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:05.905 13:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:05.905 13:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:05.905 13:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:05.905 13:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:05.905 13:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:05.905 13:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:05.905 13:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:05.905 13:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:05.905 13:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.905 13:14:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.164 13:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.164 "name": "Existed_Raid", 00:14:06.164 "uuid": "43fe1ea8-2f66-4fb2-ad0c-3bb8973084b4", 00:14:06.164 "strip_size_kb": 64, 00:14:06.164 "state": "configuring", 00:14:06.164 "raid_level": "raid0", 00:14:06.164 "superblock": true, 00:14:06.164 "num_base_bdevs": 3, 00:14:06.164 "num_base_bdevs_discovered": 0, 00:14:06.164 "num_base_bdevs_operational": 3, 00:14:06.164 "base_bdevs_list": [ 00:14:06.164 { 00:14:06.164 "name": "BaseBdev1", 00:14:06.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.164 "is_configured": false, 00:14:06.164 "data_offset": 0, 00:14:06.164 "data_size": 0 00:14:06.164 }, 00:14:06.164 { 00:14:06.164 "name": "BaseBdev2", 00:14:06.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.164 "is_configured": false, 00:14:06.164 "data_offset": 0, 00:14:06.164 "data_size": 0 00:14:06.164 }, 00:14:06.164 { 00:14:06.164 "name": "BaseBdev3", 00:14:06.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.164 "is_configured": false, 00:14:06.164 "data_offset": 0, 00:14:06.164 "data_size": 0 00:14:06.164 } 00:14:06.164 ] 00:14:06.164 }' 00:14:06.164 13:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.164 13:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.101 13:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:07.360 [2024-07-25 13:14:17.807323] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:07.360 [2024-07-25 13:14:17.807354] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc0ff40 name Existed_Raid, state 
configuring 00:14:07.360 13:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:07.619 [2024-07-25 13:14:18.035938] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.619 [2024-07-25 13:14:18.035966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.620 [2024-07-25 13:14:18.035975] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:07.620 [2024-07-25 13:14:18.035990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:07.620 [2024-07-25 13:14:18.035998] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:07.620 [2024-07-25 13:14:18.036008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:07.620 13:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:08.189 [2024-07-25 13:14:18.538753] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.189 BaseBdev1 00:14:08.189 13:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:08.189 13:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:08.189 13:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:08.189 13:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:08.189 13:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' 
]] 00:14:08.189 13:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:08.189 13:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:08.448 13:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:09.015 [ 00:14:09.015 { 00:14:09.016 "name": "BaseBdev1", 00:14:09.016 "aliases": [ 00:14:09.016 "1543719e-36f6-4d56-be0b-0ead923507b6" 00:14:09.016 ], 00:14:09.016 "product_name": "Malloc disk", 00:14:09.016 "block_size": 512, 00:14:09.016 "num_blocks": 65536, 00:14:09.016 "uuid": "1543719e-36f6-4d56-be0b-0ead923507b6", 00:14:09.016 "assigned_rate_limits": { 00:14:09.016 "rw_ios_per_sec": 0, 00:14:09.016 "rw_mbytes_per_sec": 0, 00:14:09.016 "r_mbytes_per_sec": 0, 00:14:09.016 "w_mbytes_per_sec": 0 00:14:09.016 }, 00:14:09.016 "claimed": true, 00:14:09.016 "claim_type": "exclusive_write", 00:14:09.016 "zoned": false, 00:14:09.016 "supported_io_types": { 00:14:09.016 "read": true, 00:14:09.016 "write": true, 00:14:09.016 "unmap": true, 00:14:09.016 "flush": true, 00:14:09.016 "reset": true, 00:14:09.016 "nvme_admin": false, 00:14:09.016 "nvme_io": false, 00:14:09.016 "nvme_io_md": false, 00:14:09.016 "write_zeroes": true, 00:14:09.016 "zcopy": true, 00:14:09.016 "get_zone_info": false, 00:14:09.016 "zone_management": false, 00:14:09.016 "zone_append": false, 00:14:09.016 "compare": false, 00:14:09.016 "compare_and_write": false, 00:14:09.016 "abort": true, 00:14:09.016 "seek_hole": false, 00:14:09.016 "seek_data": false, 00:14:09.016 "copy": true, 00:14:09.016 "nvme_iov_md": false 00:14:09.016 }, 00:14:09.016 "memory_domains": [ 00:14:09.016 { 00:14:09.016 "dma_device_id": "system", 00:14:09.016 "dma_device_type": 1 
00:14:09.016 }, 00:14:09.016 { 00:14:09.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.016 "dma_device_type": 2 00:14:09.016 } 00:14:09.016 ], 00:14:09.016 "driver_specific": {} 00:14:09.016 } 00:14:09.016 ] 00:14:09.016 13:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:09.016 13:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:09.016 13:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:09.016 13:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:09.016 13:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:09.016 13:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:09.016 13:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:09.016 13:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:09.016 13:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:09.016 13:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:09.016 13:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:09.016 13:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.016 13:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.275 13:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.275 "name": "Existed_Raid", 
00:14:09.275 "uuid": "e8353008-7f16-40df-a25e-32d76a4654e2", 00:14:09.275 "strip_size_kb": 64, 00:14:09.275 "state": "configuring", 00:14:09.275 "raid_level": "raid0", 00:14:09.275 "superblock": true, 00:14:09.275 "num_base_bdevs": 3, 00:14:09.275 "num_base_bdevs_discovered": 1, 00:14:09.275 "num_base_bdevs_operational": 3, 00:14:09.275 "base_bdevs_list": [ 00:14:09.275 { 00:14:09.275 "name": "BaseBdev1", 00:14:09.275 "uuid": "1543719e-36f6-4d56-be0b-0ead923507b6", 00:14:09.275 "is_configured": true, 00:14:09.275 "data_offset": 2048, 00:14:09.275 "data_size": 63488 00:14:09.275 }, 00:14:09.275 { 00:14:09.275 "name": "BaseBdev2", 00:14:09.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.275 "is_configured": false, 00:14:09.275 "data_offset": 0, 00:14:09.275 "data_size": 0 00:14:09.275 }, 00:14:09.275 { 00:14:09.275 "name": "BaseBdev3", 00:14:09.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.275 "is_configured": false, 00:14:09.275 "data_offset": 0, 00:14:09.275 "data_size": 0 00:14:09.275 } 00:14:09.275 ] 00:14:09.275 }' 00:14:09.275 13:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.275 13:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.843 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:09.843 [2024-07-25 13:14:20.311422] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.843 [2024-07-25 13:14:20.311458] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc0f810 name Existed_Raid, state configuring 00:14:09.843 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' 
-n Existed_Raid 00:14:10.102 [2024-07-25 13:14:20.524023] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.102 [2024-07-25 13:14:20.525432] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.102 [2024-07-25 13:14:20.525464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.102 [2024-07-25 13:14:20.525474] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:10.102 [2024-07-25 13:14:20.525485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:10.102 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:10.102 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:10.102 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:10.102 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:10.102 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:10.102 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:10.102 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:10.102 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:10.102 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:10.102 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:10.102 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:10.102 13:14:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:10.102 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.102 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.361 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:10.361 "name": "Existed_Raid", 00:14:10.361 "uuid": "aa718501-c65a-4ee6-99c7-b9800a2f8ade", 00:14:10.361 "strip_size_kb": 64, 00:14:10.361 "state": "configuring", 00:14:10.361 "raid_level": "raid0", 00:14:10.361 "superblock": true, 00:14:10.361 "num_base_bdevs": 3, 00:14:10.361 "num_base_bdevs_discovered": 1, 00:14:10.361 "num_base_bdevs_operational": 3, 00:14:10.361 "base_bdevs_list": [ 00:14:10.361 { 00:14:10.361 "name": "BaseBdev1", 00:14:10.361 "uuid": "1543719e-36f6-4d56-be0b-0ead923507b6", 00:14:10.361 "is_configured": true, 00:14:10.361 "data_offset": 2048, 00:14:10.361 "data_size": 63488 00:14:10.361 }, 00:14:10.361 { 00:14:10.361 "name": "BaseBdev2", 00:14:10.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.361 "is_configured": false, 00:14:10.361 "data_offset": 0, 00:14:10.361 "data_size": 0 00:14:10.361 }, 00:14:10.361 { 00:14:10.361 "name": "BaseBdev3", 00:14:10.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.361 "is_configured": false, 00:14:10.361 "data_offset": 0, 00:14:10.361 "data_size": 0 00:14:10.361 } 00:14:10.361 ] 00:14:10.361 }' 00:14:10.361 13:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:10.361 13:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.929 13:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:11.189 [2024-07-25 13:14:21.541762] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.189 BaseBdev2 00:14:11.189 13:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:11.189 13:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:11.189 13:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:11.189 13:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:11.189 13:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:11.189 13:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:11.189 13:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:11.448 13:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:11.707 [ 00:14:11.707 { 00:14:11.707 "name": "BaseBdev2", 00:14:11.707 "aliases": [ 00:14:11.707 "e63556c3-9028-48d5-aaae-cc7f9f2b2105" 00:14:11.707 ], 00:14:11.707 "product_name": "Malloc disk", 00:14:11.707 "block_size": 512, 00:14:11.707 "num_blocks": 65536, 00:14:11.707 "uuid": "e63556c3-9028-48d5-aaae-cc7f9f2b2105", 00:14:11.707 "assigned_rate_limits": { 00:14:11.707 "rw_ios_per_sec": 0, 00:14:11.707 "rw_mbytes_per_sec": 0, 00:14:11.707 "r_mbytes_per_sec": 0, 00:14:11.707 "w_mbytes_per_sec": 0 00:14:11.707 }, 00:14:11.707 "claimed": true, 00:14:11.707 "claim_type": "exclusive_write", 00:14:11.707 "zoned": false, 00:14:11.707 "supported_io_types": { 
00:14:11.707 "read": true, 00:14:11.707 "write": true, 00:14:11.707 "unmap": true, 00:14:11.707 "flush": true, 00:14:11.707 "reset": true, 00:14:11.707 "nvme_admin": false, 00:14:11.707 "nvme_io": false, 00:14:11.707 "nvme_io_md": false, 00:14:11.707 "write_zeroes": true, 00:14:11.707 "zcopy": true, 00:14:11.707 "get_zone_info": false, 00:14:11.707 "zone_management": false, 00:14:11.707 "zone_append": false, 00:14:11.707 "compare": false, 00:14:11.707 "compare_and_write": false, 00:14:11.707 "abort": true, 00:14:11.707 "seek_hole": false, 00:14:11.707 "seek_data": false, 00:14:11.707 "copy": true, 00:14:11.707 "nvme_iov_md": false 00:14:11.707 }, 00:14:11.707 "memory_domains": [ 00:14:11.707 { 00:14:11.707 "dma_device_id": "system", 00:14:11.707 "dma_device_type": 1 00:14:11.707 }, 00:14:11.707 { 00:14:11.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.707 "dma_device_type": 2 00:14:11.707 } 00:14:11.707 ], 00:14:11.707 "driver_specific": {} 00:14:11.707 } 00:14:11.707 ] 00:14:11.707 13:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:11.707 13:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:11.707 13:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:11.708 13:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:11.708 13:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:11.708 13:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:11.708 13:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:11.708 13:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:11.708 13:14:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:11.708 13:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:11.708 13:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:11.708 13:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:11.708 13:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:11.708 13:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.708 13:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.967 13:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:11.967 "name": "Existed_Raid", 00:14:11.967 "uuid": "aa718501-c65a-4ee6-99c7-b9800a2f8ade", 00:14:11.967 "strip_size_kb": 64, 00:14:11.967 "state": "configuring", 00:14:11.967 "raid_level": "raid0", 00:14:11.967 "superblock": true, 00:14:11.967 "num_base_bdevs": 3, 00:14:11.967 "num_base_bdevs_discovered": 2, 00:14:11.967 "num_base_bdevs_operational": 3, 00:14:11.967 "base_bdevs_list": [ 00:14:11.967 { 00:14:11.967 "name": "BaseBdev1", 00:14:11.967 "uuid": "1543719e-36f6-4d56-be0b-0ead923507b6", 00:14:11.967 "is_configured": true, 00:14:11.967 "data_offset": 2048, 00:14:11.967 "data_size": 63488 00:14:11.967 }, 00:14:11.967 { 00:14:11.967 "name": "BaseBdev2", 00:14:11.967 "uuid": "e63556c3-9028-48d5-aaae-cc7f9f2b2105", 00:14:11.967 "is_configured": true, 00:14:11.967 "data_offset": 2048, 00:14:11.967 "data_size": 63488 00:14:11.967 }, 00:14:11.967 { 00:14:11.967 "name": "BaseBdev3", 00:14:11.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.967 "is_configured": false, 00:14:11.967 "data_offset": 0, 00:14:11.967 
"data_size": 0 00:14:11.967 } 00:14:11.967 ] 00:14:11.967 }' 00:14:11.967 13:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:11.967 13:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.536 13:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:12.536 [2024-07-25 13:14:23.000715] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.536 [2024-07-25 13:14:23.000855] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xc10710 00:14:12.536 [2024-07-25 13:14:23.000868] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:12.536 [2024-07-25 13:14:23.001030] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc071e0 00:14:12.536 [2024-07-25 13:14:23.001137] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc10710 00:14:12.536 [2024-07-25 13:14:23.001156] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xc10710 00:14:12.536 [2024-07-25 13:14:23.001240] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.536 BaseBdev3 00:14:12.536 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:12.536 13:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:12.536 13:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:12.536 13:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:12.536 13:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:12.536 13:14:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:12.536 13:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:12.795 13:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:13.056 [ 00:14:13.056 { 00:14:13.056 "name": "BaseBdev3", 00:14:13.056 "aliases": [ 00:14:13.056 "62dc1869-624d-4f66-94f7-20a4529316a5" 00:14:13.056 ], 00:14:13.056 "product_name": "Malloc disk", 00:14:13.056 "block_size": 512, 00:14:13.056 "num_blocks": 65536, 00:14:13.056 "uuid": "62dc1869-624d-4f66-94f7-20a4529316a5", 00:14:13.056 "assigned_rate_limits": { 00:14:13.056 "rw_ios_per_sec": 0, 00:14:13.056 "rw_mbytes_per_sec": 0, 00:14:13.056 "r_mbytes_per_sec": 0, 00:14:13.056 "w_mbytes_per_sec": 0 00:14:13.056 }, 00:14:13.056 "claimed": true, 00:14:13.056 "claim_type": "exclusive_write", 00:14:13.056 "zoned": false, 00:14:13.056 "supported_io_types": { 00:14:13.056 "read": true, 00:14:13.056 "write": true, 00:14:13.056 "unmap": true, 00:14:13.056 "flush": true, 00:14:13.056 "reset": true, 00:14:13.056 "nvme_admin": false, 00:14:13.056 "nvme_io": false, 00:14:13.056 "nvme_io_md": false, 00:14:13.056 "write_zeroes": true, 00:14:13.056 "zcopy": true, 00:14:13.056 "get_zone_info": false, 00:14:13.056 "zone_management": false, 00:14:13.056 "zone_append": false, 00:14:13.056 "compare": false, 00:14:13.056 "compare_and_write": false, 00:14:13.056 "abort": true, 00:14:13.056 "seek_hole": false, 00:14:13.056 "seek_data": false, 00:14:13.056 "copy": true, 00:14:13.056 "nvme_iov_md": false 00:14:13.056 }, 00:14:13.056 "memory_domains": [ 00:14:13.056 { 00:14:13.056 "dma_device_id": "system", 00:14:13.056 "dma_device_type": 1 00:14:13.056 }, 
00:14:13.056 { 00:14:13.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.056 "dma_device_type": 2 00:14:13.056 } 00:14:13.056 ], 00:14:13.056 "driver_specific": {} 00:14:13.056 } 00:14:13.056 ] 00:14:13.056 13:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:13.056 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:13.056 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:13.056 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:13.056 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:13.056 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:13.056 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:13.056 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:13.056 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:13.056 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:13.056 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:13.056 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:13.056 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:13.056 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.056 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:14:13.314 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:13.314 "name": "Existed_Raid", 00:14:13.314 "uuid": "aa718501-c65a-4ee6-99c7-b9800a2f8ade", 00:14:13.314 "strip_size_kb": 64, 00:14:13.314 "state": "online", 00:14:13.314 "raid_level": "raid0", 00:14:13.314 "superblock": true, 00:14:13.314 "num_base_bdevs": 3, 00:14:13.314 "num_base_bdevs_discovered": 3, 00:14:13.314 "num_base_bdevs_operational": 3, 00:14:13.314 "base_bdevs_list": [ 00:14:13.314 { 00:14:13.314 "name": "BaseBdev1", 00:14:13.314 "uuid": "1543719e-36f6-4d56-be0b-0ead923507b6", 00:14:13.314 "is_configured": true, 00:14:13.314 "data_offset": 2048, 00:14:13.314 "data_size": 63488 00:14:13.314 }, 00:14:13.314 { 00:14:13.314 "name": "BaseBdev2", 00:14:13.314 "uuid": "e63556c3-9028-48d5-aaae-cc7f9f2b2105", 00:14:13.314 "is_configured": true, 00:14:13.314 "data_offset": 2048, 00:14:13.314 "data_size": 63488 00:14:13.314 }, 00:14:13.314 { 00:14:13.314 "name": "BaseBdev3", 00:14:13.315 "uuid": "62dc1869-624d-4f66-94f7-20a4529316a5", 00:14:13.315 "is_configured": true, 00:14:13.315 "data_offset": 2048, 00:14:13.315 "data_size": 63488 00:14:13.315 } 00:14:13.315 ] 00:14:13.315 }' 00:14:13.315 13:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.315 13:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.881 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:13.881 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:13.881 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:13.881 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:13.881 13:14:24 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:13.881 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:13.881 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:13.881 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:14.139 [2024-07-25 13:14:24.476873] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:14.139 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:14.139 "name": "Existed_Raid", 00:14:14.139 "aliases": [ 00:14:14.139 "aa718501-c65a-4ee6-99c7-b9800a2f8ade" 00:14:14.139 ], 00:14:14.139 "product_name": "Raid Volume", 00:14:14.139 "block_size": 512, 00:14:14.139 "num_blocks": 190464, 00:14:14.139 "uuid": "aa718501-c65a-4ee6-99c7-b9800a2f8ade", 00:14:14.139 "assigned_rate_limits": { 00:14:14.139 "rw_ios_per_sec": 0, 00:14:14.139 "rw_mbytes_per_sec": 0, 00:14:14.139 "r_mbytes_per_sec": 0, 00:14:14.139 "w_mbytes_per_sec": 0 00:14:14.139 }, 00:14:14.139 "claimed": false, 00:14:14.139 "zoned": false, 00:14:14.139 "supported_io_types": { 00:14:14.139 "read": true, 00:14:14.139 "write": true, 00:14:14.139 "unmap": true, 00:14:14.139 "flush": true, 00:14:14.139 "reset": true, 00:14:14.139 "nvme_admin": false, 00:14:14.139 "nvme_io": false, 00:14:14.139 "nvme_io_md": false, 00:14:14.139 "write_zeroes": true, 00:14:14.139 "zcopy": false, 00:14:14.139 "get_zone_info": false, 00:14:14.139 "zone_management": false, 00:14:14.139 "zone_append": false, 00:14:14.139 "compare": false, 00:14:14.139 "compare_and_write": false, 00:14:14.139 "abort": false, 00:14:14.139 "seek_hole": false, 00:14:14.139 "seek_data": false, 00:14:14.139 "copy": false, 00:14:14.139 "nvme_iov_md": false 00:14:14.139 }, 00:14:14.139 "memory_domains": [ 00:14:14.139 { 
00:14:14.139 "dma_device_id": "system", 00:14:14.139 "dma_device_type": 1 00:14:14.139 }, 00:14:14.140 { 00:14:14.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.140 "dma_device_type": 2 00:14:14.140 }, 00:14:14.140 { 00:14:14.140 "dma_device_id": "system", 00:14:14.140 "dma_device_type": 1 00:14:14.140 }, 00:14:14.140 { 00:14:14.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.140 "dma_device_type": 2 00:14:14.140 }, 00:14:14.140 { 00:14:14.140 "dma_device_id": "system", 00:14:14.140 "dma_device_type": 1 00:14:14.140 }, 00:14:14.140 { 00:14:14.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.140 "dma_device_type": 2 00:14:14.140 } 00:14:14.140 ], 00:14:14.140 "driver_specific": { 00:14:14.140 "raid": { 00:14:14.140 "uuid": "aa718501-c65a-4ee6-99c7-b9800a2f8ade", 00:14:14.140 "strip_size_kb": 64, 00:14:14.140 "state": "online", 00:14:14.140 "raid_level": "raid0", 00:14:14.140 "superblock": true, 00:14:14.140 "num_base_bdevs": 3, 00:14:14.140 "num_base_bdevs_discovered": 3, 00:14:14.140 "num_base_bdevs_operational": 3, 00:14:14.140 "base_bdevs_list": [ 00:14:14.140 { 00:14:14.140 "name": "BaseBdev1", 00:14:14.140 "uuid": "1543719e-36f6-4d56-be0b-0ead923507b6", 00:14:14.140 "is_configured": true, 00:14:14.140 "data_offset": 2048, 00:14:14.140 "data_size": 63488 00:14:14.140 }, 00:14:14.140 { 00:14:14.140 "name": "BaseBdev2", 00:14:14.140 "uuid": "e63556c3-9028-48d5-aaae-cc7f9f2b2105", 00:14:14.140 "is_configured": true, 00:14:14.140 "data_offset": 2048, 00:14:14.140 "data_size": 63488 00:14:14.140 }, 00:14:14.140 { 00:14:14.140 "name": "BaseBdev3", 00:14:14.140 "uuid": "62dc1869-624d-4f66-94f7-20a4529316a5", 00:14:14.140 "is_configured": true, 00:14:14.140 "data_offset": 2048, 00:14:14.140 "data_size": 63488 00:14:14.140 } 00:14:14.140 ] 00:14:14.140 } 00:14:14.140 } 00:14:14.140 }' 00:14:14.140 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:14:14.140 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:14.140 BaseBdev2 00:14:14.140 BaseBdev3' 00:14:14.140 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:14.140 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:14.140 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:14.397 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:14.397 "name": "BaseBdev1", 00:14:14.397 "aliases": [ 00:14:14.398 "1543719e-36f6-4d56-be0b-0ead923507b6" 00:14:14.398 ], 00:14:14.398 "product_name": "Malloc disk", 00:14:14.398 "block_size": 512, 00:14:14.398 "num_blocks": 65536, 00:14:14.398 "uuid": "1543719e-36f6-4d56-be0b-0ead923507b6", 00:14:14.398 "assigned_rate_limits": { 00:14:14.398 "rw_ios_per_sec": 0, 00:14:14.398 "rw_mbytes_per_sec": 0, 00:14:14.398 "r_mbytes_per_sec": 0, 00:14:14.398 "w_mbytes_per_sec": 0 00:14:14.398 }, 00:14:14.398 "claimed": true, 00:14:14.398 "claim_type": "exclusive_write", 00:14:14.398 "zoned": false, 00:14:14.398 "supported_io_types": { 00:14:14.398 "read": true, 00:14:14.398 "write": true, 00:14:14.398 "unmap": true, 00:14:14.398 "flush": true, 00:14:14.398 "reset": true, 00:14:14.398 "nvme_admin": false, 00:14:14.398 "nvme_io": false, 00:14:14.398 "nvme_io_md": false, 00:14:14.398 "write_zeroes": true, 00:14:14.398 "zcopy": true, 00:14:14.398 "get_zone_info": false, 00:14:14.398 "zone_management": false, 00:14:14.398 "zone_append": false, 00:14:14.398 "compare": false, 00:14:14.398 "compare_and_write": false, 00:14:14.398 "abort": true, 00:14:14.398 "seek_hole": false, 00:14:14.398 "seek_data": false, 00:14:14.398 "copy": true, 00:14:14.398 "nvme_iov_md": false 00:14:14.398 
}, 00:14:14.398 "memory_domains": [ 00:14:14.398 { 00:14:14.398 "dma_device_id": "system", 00:14:14.398 "dma_device_type": 1 00:14:14.398 }, 00:14:14.398 { 00:14:14.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.398 "dma_device_type": 2 00:14:14.398 } 00:14:14.398 ], 00:14:14.398 "driver_specific": {} 00:14:14.398 }' 00:14:14.398 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.398 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.398 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:14.398 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:14.657 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:14.657 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:14.657 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:14.657 13:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:14.657 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:14.657 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:14.657 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:14.657 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:14.657 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:14.657 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:14.657 13:14:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:14.915 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:14.915 "name": "BaseBdev2", 00:14:14.915 "aliases": [ 00:14:14.915 "e63556c3-9028-48d5-aaae-cc7f9f2b2105" 00:14:14.915 ], 00:14:14.915 "product_name": "Malloc disk", 00:14:14.915 "block_size": 512, 00:14:14.915 "num_blocks": 65536, 00:14:14.915 "uuid": "e63556c3-9028-48d5-aaae-cc7f9f2b2105", 00:14:14.915 "assigned_rate_limits": { 00:14:14.915 "rw_ios_per_sec": 0, 00:14:14.915 "rw_mbytes_per_sec": 0, 00:14:14.915 "r_mbytes_per_sec": 0, 00:14:14.915 "w_mbytes_per_sec": 0 00:14:14.915 }, 00:14:14.915 "claimed": true, 00:14:14.915 "claim_type": "exclusive_write", 00:14:14.915 "zoned": false, 00:14:14.915 "supported_io_types": { 00:14:14.915 "read": true, 00:14:14.915 "write": true, 00:14:14.915 "unmap": true, 00:14:14.915 "flush": true, 00:14:14.915 "reset": true, 00:14:14.915 "nvme_admin": false, 00:14:14.915 "nvme_io": false, 00:14:14.915 "nvme_io_md": false, 00:14:14.915 "write_zeroes": true, 00:14:14.915 "zcopy": true, 00:14:14.915 "get_zone_info": false, 00:14:14.915 "zone_management": false, 00:14:14.915 "zone_append": false, 00:14:14.915 "compare": false, 00:14:14.915 "compare_and_write": false, 00:14:14.915 "abort": true, 00:14:14.915 "seek_hole": false, 00:14:14.915 "seek_data": false, 00:14:14.915 "copy": true, 00:14:14.915 "nvme_iov_md": false 00:14:14.915 }, 00:14:14.915 "memory_domains": [ 00:14:14.915 { 00:14:14.915 "dma_device_id": "system", 00:14:14.915 "dma_device_type": 1 00:14:14.915 }, 00:14:14.915 { 00:14:14.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.915 "dma_device_type": 2 00:14:14.915 } 00:14:14.915 ], 00:14:14.915 "driver_specific": {} 00:14:14.915 }' 00:14:14.915 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.915 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:15.174 13:14:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:15.174 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:15.174 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:15.174 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:15.174 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:15.174 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:15.174 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:15.174 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:15.174 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:15.433 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:15.433 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:15.433 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:15.433 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:15.433 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:15.433 "name": "BaseBdev3", 00:14:15.433 "aliases": [ 00:14:15.433 "62dc1869-624d-4f66-94f7-20a4529316a5" 00:14:15.433 ], 00:14:15.433 "product_name": "Malloc disk", 00:14:15.433 "block_size": 512, 00:14:15.433 "num_blocks": 65536, 00:14:15.433 "uuid": "62dc1869-624d-4f66-94f7-20a4529316a5", 00:14:15.433 "assigned_rate_limits": { 00:14:15.433 "rw_ios_per_sec": 0, 00:14:15.433 "rw_mbytes_per_sec": 0, 00:14:15.433 
"r_mbytes_per_sec": 0, 00:14:15.433 "w_mbytes_per_sec": 0 00:14:15.433 }, 00:14:15.433 "claimed": true, 00:14:15.433 "claim_type": "exclusive_write", 00:14:15.433 "zoned": false, 00:14:15.433 "supported_io_types": { 00:14:15.433 "read": true, 00:14:15.433 "write": true, 00:14:15.433 "unmap": true, 00:14:15.433 "flush": true, 00:14:15.433 "reset": true, 00:14:15.433 "nvme_admin": false, 00:14:15.433 "nvme_io": false, 00:14:15.433 "nvme_io_md": false, 00:14:15.433 "write_zeroes": true, 00:14:15.433 "zcopy": true, 00:14:15.433 "get_zone_info": false, 00:14:15.433 "zone_management": false, 00:14:15.433 "zone_append": false, 00:14:15.433 "compare": false, 00:14:15.433 "compare_and_write": false, 00:14:15.433 "abort": true, 00:14:15.433 "seek_hole": false, 00:14:15.433 "seek_data": false, 00:14:15.433 "copy": true, 00:14:15.433 "nvme_iov_md": false 00:14:15.433 }, 00:14:15.433 "memory_domains": [ 00:14:15.433 { 00:14:15.433 "dma_device_id": "system", 00:14:15.433 "dma_device_type": 1 00:14:15.433 }, 00:14:15.433 { 00:14:15.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.433 "dma_device_type": 2 00:14:15.433 } 00:14:15.433 ], 00:14:15.433 "driver_specific": {} 00:14:15.433 }' 00:14:15.433 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:15.691 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:15.691 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:15.691 13:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:15.691 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:15.691 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:15.692 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:15.692 13:14:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:15.692 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:15.692 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:15.950 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:15.950 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:15.950 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:16.208 [2024-07-25 13:14:26.466062] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:16.208 [2024-07-25 13:14:26.466086] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.208 [2024-07-25 13:14:26.466123] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.208 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:16.208 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:16.208 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:16.208 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:16.208 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:16.208 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:16.208 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:16.208 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:16.208 13:14:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:16.208 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:16.209 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:16.209 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:16.209 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:16.209 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:16.209 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:16.209 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.209 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.467 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:16.467 "name": "Existed_Raid", 00:14:16.467 "uuid": "aa718501-c65a-4ee6-99c7-b9800a2f8ade", 00:14:16.467 "strip_size_kb": 64, 00:14:16.467 "state": "offline", 00:14:16.467 "raid_level": "raid0", 00:14:16.467 "superblock": true, 00:14:16.467 "num_base_bdevs": 3, 00:14:16.467 "num_base_bdevs_discovered": 2, 00:14:16.467 "num_base_bdevs_operational": 2, 00:14:16.467 "base_bdevs_list": [ 00:14:16.467 { 00:14:16.467 "name": null, 00:14:16.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.467 "is_configured": false, 00:14:16.467 "data_offset": 2048, 00:14:16.467 "data_size": 63488 00:14:16.467 }, 00:14:16.467 { 00:14:16.467 "name": "BaseBdev2", 00:14:16.468 "uuid": "e63556c3-9028-48d5-aaae-cc7f9f2b2105", 00:14:16.468 "is_configured": true, 00:14:16.468 
"data_offset": 2048, 00:14:16.468 "data_size": 63488 00:14:16.468 }, 00:14:16.468 { 00:14:16.468 "name": "BaseBdev3", 00:14:16.468 "uuid": "62dc1869-624d-4f66-94f7-20a4529316a5", 00:14:16.468 "is_configured": true, 00:14:16.468 "data_offset": 2048, 00:14:16.468 "data_size": 63488 00:14:16.468 } 00:14:16.468 ] 00:14:16.468 }' 00:14:16.468 13:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:16.468 13:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.035 13:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:17.035 13:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:17.035 13:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.035 13:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:17.035 13:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:17.035 13:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:17.035 13:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:17.294 [2024-07-25 13:14:27.710301] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:17.294 13:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:17.294 13:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:17.294 13:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.294 13:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:17.553 13:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:17.553 13:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:17.553 13:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:17.812 [2024-07-25 13:14:28.173595] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:17.812 [2024-07-25 13:14:28.173632] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc10710 name Existed_Raid, state offline 00:14:17.812 13:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:17.812 13:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:17.812 13:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.812 13:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:18.070 13:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:18.070 13:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:18.070 13:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:14:18.070 13:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:18.070 13:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:18.070 13:14:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:18.329 BaseBdev2 00:14:18.329 13:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:18.329 13:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:18.329 13:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:18.329 13:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:18.329 13:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:18.329 13:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:18.329 13:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:18.588 13:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:18.847 [ 00:14:18.847 { 00:14:18.847 "name": "BaseBdev2", 00:14:18.847 "aliases": [ 00:14:18.847 "2f2cecea-d020-47b8-b62f-ba311f2b318e" 00:14:18.847 ], 00:14:18.847 "product_name": "Malloc disk", 00:14:18.847 "block_size": 512, 00:14:18.847 "num_blocks": 65536, 00:14:18.847 "uuid": "2f2cecea-d020-47b8-b62f-ba311f2b318e", 00:14:18.847 "assigned_rate_limits": { 00:14:18.847 "rw_ios_per_sec": 0, 00:14:18.847 "rw_mbytes_per_sec": 0, 00:14:18.847 "r_mbytes_per_sec": 0, 00:14:18.847 "w_mbytes_per_sec": 0 00:14:18.847 }, 00:14:18.847 "claimed": false, 00:14:18.847 "zoned": false, 00:14:18.847 "supported_io_types": { 00:14:18.847 "read": true, 00:14:18.847 "write": true, 00:14:18.847 "unmap": 
true, 00:14:18.847 "flush": true, 00:14:18.847 "reset": true, 00:14:18.847 "nvme_admin": false, 00:14:18.847 "nvme_io": false, 00:14:18.847 "nvme_io_md": false, 00:14:18.847 "write_zeroes": true, 00:14:18.847 "zcopy": true, 00:14:18.847 "get_zone_info": false, 00:14:18.847 "zone_management": false, 00:14:18.847 "zone_append": false, 00:14:18.847 "compare": false, 00:14:18.847 "compare_and_write": false, 00:14:18.847 "abort": true, 00:14:18.847 "seek_hole": false, 00:14:18.847 "seek_data": false, 00:14:18.847 "copy": true, 00:14:18.847 "nvme_iov_md": false 00:14:18.847 }, 00:14:18.847 "memory_domains": [ 00:14:18.847 { 00:14:18.847 "dma_device_id": "system", 00:14:18.847 "dma_device_type": 1 00:14:18.847 }, 00:14:18.847 { 00:14:18.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.847 "dma_device_type": 2 00:14:18.847 } 00:14:18.847 ], 00:14:18.847 "driver_specific": {} 00:14:18.847 } 00:14:18.847 ] 00:14:18.847 13:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:18.847 13:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:18.847 13:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:18.847 13:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:19.106 BaseBdev3 00:14:19.106 13:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:19.106 13:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:19.106 13:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:19.106 13:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:19.106 13:14:29 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:19.106 13:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:19.106 13:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:19.106 13:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:19.410 [ 00:14:19.410 { 00:14:19.410 "name": "BaseBdev3", 00:14:19.410 "aliases": [ 00:14:19.410 "362f864e-7e57-4a66-a013-3c6a6f4de6cd" 00:14:19.410 ], 00:14:19.410 "product_name": "Malloc disk", 00:14:19.410 "block_size": 512, 00:14:19.410 "num_blocks": 65536, 00:14:19.410 "uuid": "362f864e-7e57-4a66-a013-3c6a6f4de6cd", 00:14:19.410 "assigned_rate_limits": { 00:14:19.410 "rw_ios_per_sec": 0, 00:14:19.410 "rw_mbytes_per_sec": 0, 00:14:19.410 "r_mbytes_per_sec": 0, 00:14:19.410 "w_mbytes_per_sec": 0 00:14:19.410 }, 00:14:19.410 "claimed": false, 00:14:19.410 "zoned": false, 00:14:19.410 "supported_io_types": { 00:14:19.410 "read": true, 00:14:19.410 "write": true, 00:14:19.410 "unmap": true, 00:14:19.410 "flush": true, 00:14:19.410 "reset": true, 00:14:19.410 "nvme_admin": false, 00:14:19.410 "nvme_io": false, 00:14:19.410 "nvme_io_md": false, 00:14:19.410 "write_zeroes": true, 00:14:19.410 "zcopy": true, 00:14:19.410 "get_zone_info": false, 00:14:19.410 "zone_management": false, 00:14:19.410 "zone_append": false, 00:14:19.410 "compare": false, 00:14:19.410 "compare_and_write": false, 00:14:19.410 "abort": true, 00:14:19.410 "seek_hole": false, 00:14:19.410 "seek_data": false, 00:14:19.410 "copy": true, 00:14:19.410 "nvme_iov_md": false 00:14:19.410 }, 00:14:19.410 "memory_domains": [ 00:14:19.410 { 00:14:19.410 "dma_device_id": "system", 00:14:19.410 "dma_device_type": 1 
00:14:19.410 }, 00:14:19.410 { 00:14:19.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.410 "dma_device_type": 2 00:14:19.410 } 00:14:19.410 ], 00:14:19.410 "driver_specific": {} 00:14:19.410 } 00:14:19.410 ] 00:14:19.410 13:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:19.410 13:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:19.411 13:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:19.411 13:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:19.670 [2024-07-25 13:14:30.013110] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.670 [2024-07-25 13:14:30.013162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.670 [2024-07-25 13:14:30.013185] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.670 [2024-07-25 13:14:30.014429] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:19.670 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:19.670 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:19.670 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:19.670 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:19.670 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:19.670 13:14:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:19.670 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:19.670 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:19.670 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:19.670 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:19.670 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.670 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.928 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:19.928 "name": "Existed_Raid", 00:14:19.928 "uuid": "a09d5bb9-9f31-4c7f-986e-f38ccbf8c2ed", 00:14:19.928 "strip_size_kb": 64, 00:14:19.928 "state": "configuring", 00:14:19.928 "raid_level": "raid0", 00:14:19.928 "superblock": true, 00:14:19.928 "num_base_bdevs": 3, 00:14:19.928 "num_base_bdevs_discovered": 2, 00:14:19.928 "num_base_bdevs_operational": 3, 00:14:19.928 "base_bdevs_list": [ 00:14:19.928 { 00:14:19.928 "name": "BaseBdev1", 00:14:19.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.928 "is_configured": false, 00:14:19.928 "data_offset": 0, 00:14:19.928 "data_size": 0 00:14:19.928 }, 00:14:19.928 { 00:14:19.928 "name": "BaseBdev2", 00:14:19.928 "uuid": "2f2cecea-d020-47b8-b62f-ba311f2b318e", 00:14:19.928 "is_configured": true, 00:14:19.928 "data_offset": 2048, 00:14:19.928 "data_size": 63488 00:14:19.928 }, 00:14:19.928 { 00:14:19.928 "name": "BaseBdev3", 00:14:19.928 "uuid": "362f864e-7e57-4a66-a013-3c6a6f4de6cd", 00:14:19.928 "is_configured": true, 00:14:19.928 "data_offset": 2048, 00:14:19.928 
"data_size": 63488 00:14:19.928 } 00:14:19.928 ] 00:14:19.928 }' 00:14:19.928 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:19.928 13:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.496 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:20.496 [2024-07-25 13:14:30.943543] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:20.496 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:20.496 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:20.496 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:20.496 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:20.496 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:20.496 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:20.496 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:20.496 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:20.496 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:20.496 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:20.496 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:14:20.496 13:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.754 13:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:20.754 "name": "Existed_Raid", 00:14:20.754 "uuid": "a09d5bb9-9f31-4c7f-986e-f38ccbf8c2ed", 00:14:20.754 "strip_size_kb": 64, 00:14:20.754 "state": "configuring", 00:14:20.754 "raid_level": "raid0", 00:14:20.754 "superblock": true, 00:14:20.754 "num_base_bdevs": 3, 00:14:20.754 "num_base_bdevs_discovered": 1, 00:14:20.754 "num_base_bdevs_operational": 3, 00:14:20.754 "base_bdevs_list": [ 00:14:20.754 { 00:14:20.754 "name": "BaseBdev1", 00:14:20.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.754 "is_configured": false, 00:14:20.754 "data_offset": 0, 00:14:20.754 "data_size": 0 00:14:20.754 }, 00:14:20.754 { 00:14:20.754 "name": null, 00:14:20.754 "uuid": "2f2cecea-d020-47b8-b62f-ba311f2b318e", 00:14:20.754 "is_configured": false, 00:14:20.754 "data_offset": 2048, 00:14:20.754 "data_size": 63488 00:14:20.754 }, 00:14:20.754 { 00:14:20.754 "name": "BaseBdev3", 00:14:20.754 "uuid": "362f864e-7e57-4a66-a013-3c6a6f4de6cd", 00:14:20.754 "is_configured": true, 00:14:20.754 "data_offset": 2048, 00:14:20.754 "data_size": 63488 00:14:20.754 } 00:14:20.754 ] 00:14:20.754 }' 00:14:20.754 13:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:20.754 13:14:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.319 13:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.319 13:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:21.577 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == 
\f\a\l\s\e ]] 00:14:21.577 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:21.836 [2024-07-25 13:14:32.226031] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.836 BaseBdev1 00:14:21.836 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:21.836 13:14:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:21.836 13:14:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:21.836 13:14:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:21.836 13:14:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.836 13:14:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.836 13:14:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:22.095 13:14:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:22.354 [ 00:14:22.354 { 00:14:22.354 "name": "BaseBdev1", 00:14:22.354 "aliases": [ 00:14:22.354 "db43cb04-88a8-4633-8da4-f7a7174df464" 00:14:22.354 ], 00:14:22.354 "product_name": "Malloc disk", 00:14:22.354 "block_size": 512, 00:14:22.354 "num_blocks": 65536, 00:14:22.354 "uuid": "db43cb04-88a8-4633-8da4-f7a7174df464", 00:14:22.354 "assigned_rate_limits": { 00:14:22.354 "rw_ios_per_sec": 0, 00:14:22.354 "rw_mbytes_per_sec": 0, 00:14:22.354 "r_mbytes_per_sec": 0, 00:14:22.354 
"w_mbytes_per_sec": 0 00:14:22.354 }, 00:14:22.354 "claimed": true, 00:14:22.354 "claim_type": "exclusive_write", 00:14:22.354 "zoned": false, 00:14:22.354 "supported_io_types": { 00:14:22.354 "read": true, 00:14:22.354 "write": true, 00:14:22.354 "unmap": true, 00:14:22.354 "flush": true, 00:14:22.354 "reset": true, 00:14:22.354 "nvme_admin": false, 00:14:22.354 "nvme_io": false, 00:14:22.354 "nvme_io_md": false, 00:14:22.354 "write_zeroes": true, 00:14:22.354 "zcopy": true, 00:14:22.354 "get_zone_info": false, 00:14:22.354 "zone_management": false, 00:14:22.354 "zone_append": false, 00:14:22.354 "compare": false, 00:14:22.354 "compare_and_write": false, 00:14:22.354 "abort": true, 00:14:22.354 "seek_hole": false, 00:14:22.354 "seek_data": false, 00:14:22.354 "copy": true, 00:14:22.354 "nvme_iov_md": false 00:14:22.354 }, 00:14:22.354 "memory_domains": [ 00:14:22.354 { 00:14:22.354 "dma_device_id": "system", 00:14:22.354 "dma_device_type": 1 00:14:22.354 }, 00:14:22.354 { 00:14:22.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.354 "dma_device_type": 2 00:14:22.354 } 00:14:22.354 ], 00:14:22.354 "driver_specific": {} 00:14:22.354 } 00:14:22.354 ] 00:14:22.354 13:14:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:22.354 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:22.354 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:22.354 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:22.355 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:22.355 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:22.355 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:14:22.355 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:22.355 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:22.355 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:22.355 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:22.355 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.355 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.614 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:22.614 "name": "Existed_Raid", 00:14:22.614 "uuid": "a09d5bb9-9f31-4c7f-986e-f38ccbf8c2ed", 00:14:22.614 "strip_size_kb": 64, 00:14:22.614 "state": "configuring", 00:14:22.614 "raid_level": "raid0", 00:14:22.614 "superblock": true, 00:14:22.614 "num_base_bdevs": 3, 00:14:22.614 "num_base_bdevs_discovered": 2, 00:14:22.614 "num_base_bdevs_operational": 3, 00:14:22.614 "base_bdevs_list": [ 00:14:22.614 { 00:14:22.614 "name": "BaseBdev1", 00:14:22.614 "uuid": "db43cb04-88a8-4633-8da4-f7a7174df464", 00:14:22.614 "is_configured": true, 00:14:22.614 "data_offset": 2048, 00:14:22.614 "data_size": 63488 00:14:22.614 }, 00:14:22.614 { 00:14:22.614 "name": null, 00:14:22.614 "uuid": "2f2cecea-d020-47b8-b62f-ba311f2b318e", 00:14:22.614 "is_configured": false, 00:14:22.614 "data_offset": 2048, 00:14:22.614 "data_size": 63488 00:14:22.614 }, 00:14:22.614 { 00:14:22.614 "name": "BaseBdev3", 00:14:22.614 "uuid": "362f864e-7e57-4a66-a013-3c6a6f4de6cd", 00:14:22.614 "is_configured": true, 00:14:22.614 "data_offset": 2048, 00:14:22.614 "data_size": 63488 00:14:22.614 } 
00:14:22.614 ] 00:14:22.614 }' 00:14:22.614 13:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:22.614 13:14:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.183 13:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.183 13:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:23.441 13:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:23.441 13:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:24.008 [2024-07-25 13:14:34.219319] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:24.008 13:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:24.008 13:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:24.008 13:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:24.008 13:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:24.008 13:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:24.008 13:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:24.008 13:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:24.008 13:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:24.008 
13:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:24.008 13:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:24.008 13:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.008 13:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.008 13:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:24.008 "name": "Existed_Raid", 00:14:24.008 "uuid": "a09d5bb9-9f31-4c7f-986e-f38ccbf8c2ed", 00:14:24.008 "strip_size_kb": 64, 00:14:24.008 "state": "configuring", 00:14:24.008 "raid_level": "raid0", 00:14:24.008 "superblock": true, 00:14:24.008 "num_base_bdevs": 3, 00:14:24.008 "num_base_bdevs_discovered": 1, 00:14:24.008 "num_base_bdevs_operational": 3, 00:14:24.008 "base_bdevs_list": [ 00:14:24.008 { 00:14:24.008 "name": "BaseBdev1", 00:14:24.008 "uuid": "db43cb04-88a8-4633-8da4-f7a7174df464", 00:14:24.008 "is_configured": true, 00:14:24.008 "data_offset": 2048, 00:14:24.008 "data_size": 63488 00:14:24.008 }, 00:14:24.008 { 00:14:24.008 "name": null, 00:14:24.008 "uuid": "2f2cecea-d020-47b8-b62f-ba311f2b318e", 00:14:24.008 "is_configured": false, 00:14:24.008 "data_offset": 2048, 00:14:24.008 "data_size": 63488 00:14:24.008 }, 00:14:24.008 { 00:14:24.008 "name": null, 00:14:24.008 "uuid": "362f864e-7e57-4a66-a013-3c6a6f4de6cd", 00:14:24.008 "is_configured": false, 00:14:24.008 "data_offset": 2048, 00:14:24.008 "data_size": 63488 00:14:24.008 } 00:14:24.008 ] 00:14:24.008 }' 00:14:24.008 13:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:24.008 13:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.576 13:14:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.576 13:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:24.835 13:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:24.835 13:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:25.403 [2024-07-25 13:14:35.751367] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.403 13:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:25.403 13:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:25.403 13:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:25.403 13:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:25.403 13:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:25.403 13:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:25.403 13:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:25.403 13:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:25.403 13:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:25.403 13:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:25.403 13:14:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.403 13:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.660 13:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:25.660 "name": "Existed_Raid", 00:14:25.660 "uuid": "a09d5bb9-9f31-4c7f-986e-f38ccbf8c2ed", 00:14:25.660 "strip_size_kb": 64, 00:14:25.660 "state": "configuring", 00:14:25.660 "raid_level": "raid0", 00:14:25.660 "superblock": true, 00:14:25.660 "num_base_bdevs": 3, 00:14:25.660 "num_base_bdevs_discovered": 2, 00:14:25.660 "num_base_bdevs_operational": 3, 00:14:25.660 "base_bdevs_list": [ 00:14:25.660 { 00:14:25.660 "name": "BaseBdev1", 00:14:25.660 "uuid": "db43cb04-88a8-4633-8da4-f7a7174df464", 00:14:25.660 "is_configured": true, 00:14:25.660 "data_offset": 2048, 00:14:25.660 "data_size": 63488 00:14:25.660 }, 00:14:25.660 { 00:14:25.660 "name": null, 00:14:25.660 "uuid": "2f2cecea-d020-47b8-b62f-ba311f2b318e", 00:14:25.660 "is_configured": false, 00:14:25.660 "data_offset": 2048, 00:14:25.660 "data_size": 63488 00:14:25.660 }, 00:14:25.660 { 00:14:25.660 "name": "BaseBdev3", 00:14:25.660 "uuid": "362f864e-7e57-4a66-a013-3c6a6f4de6cd", 00:14:25.660 "is_configured": true, 00:14:25.660 "data_offset": 2048, 00:14:25.660 "data_size": 63488 00:14:25.660 } 00:14:25.660 ] 00:14:25.660 }' 00:14:25.660 13:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:25.660 13:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.224 13:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.224 13:14:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:26.481 13:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:26.481 13:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:26.481 [2024-07-25 13:14:36.950526] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:26.739 13:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:26.739 13:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:26.739 13:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:26.739 13:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:26.739 13:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:26.739 13:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:26.739 13:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:26.739 13:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:26.739 13:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:26.739 13:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:26.739 13:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.739 13:14:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.739 13:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:26.739 "name": "Existed_Raid", 00:14:26.739 "uuid": "a09d5bb9-9f31-4c7f-986e-f38ccbf8c2ed", 00:14:26.739 "strip_size_kb": 64, 00:14:26.739 "state": "configuring", 00:14:26.739 "raid_level": "raid0", 00:14:26.739 "superblock": true, 00:14:26.739 "num_base_bdevs": 3, 00:14:26.739 "num_base_bdevs_discovered": 1, 00:14:26.739 "num_base_bdevs_operational": 3, 00:14:26.739 "base_bdevs_list": [ 00:14:26.739 { 00:14:26.739 "name": null, 00:14:26.739 "uuid": "db43cb04-88a8-4633-8da4-f7a7174df464", 00:14:26.739 "is_configured": false, 00:14:26.739 "data_offset": 2048, 00:14:26.739 "data_size": 63488 00:14:26.739 }, 00:14:26.739 { 00:14:26.739 "name": null, 00:14:26.739 "uuid": "2f2cecea-d020-47b8-b62f-ba311f2b318e", 00:14:26.739 "is_configured": false, 00:14:26.739 "data_offset": 2048, 00:14:26.739 "data_size": 63488 00:14:26.739 }, 00:14:26.739 { 00:14:26.739 "name": "BaseBdev3", 00:14:26.739 "uuid": "362f864e-7e57-4a66-a013-3c6a6f4de6cd", 00:14:26.739 "is_configured": true, 00:14:26.739 "data_offset": 2048, 00:14:26.739 "data_size": 63488 00:14:26.739 } 00:14:26.739 ] 00:14:26.739 }' 00:14:26.739 13:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:26.739 13:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.304 13:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.304 13:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:27.562 13:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:27.562 13:14:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:27.820 [2024-07-25 13:14:38.203854] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.820 13:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:27.820 13:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:27.820 13:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:27.820 13:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:27.820 13:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:27.820 13:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:27.820 13:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:27.820 13:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:27.820 13:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:27.820 13:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:27.820 13:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.820 13:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.078 13:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:28.078 "name": 
"Existed_Raid", 00:14:28.078 "uuid": "a09d5bb9-9f31-4c7f-986e-f38ccbf8c2ed", 00:14:28.078 "strip_size_kb": 64, 00:14:28.078 "state": "configuring", 00:14:28.078 "raid_level": "raid0", 00:14:28.078 "superblock": true, 00:14:28.078 "num_base_bdevs": 3, 00:14:28.078 "num_base_bdevs_discovered": 2, 00:14:28.078 "num_base_bdevs_operational": 3, 00:14:28.078 "base_bdevs_list": [ 00:14:28.078 { 00:14:28.078 "name": null, 00:14:28.078 "uuid": "db43cb04-88a8-4633-8da4-f7a7174df464", 00:14:28.078 "is_configured": false, 00:14:28.078 "data_offset": 2048, 00:14:28.078 "data_size": 63488 00:14:28.078 }, 00:14:28.078 { 00:14:28.078 "name": "BaseBdev2", 00:14:28.078 "uuid": "2f2cecea-d020-47b8-b62f-ba311f2b318e", 00:14:28.078 "is_configured": true, 00:14:28.078 "data_offset": 2048, 00:14:28.078 "data_size": 63488 00:14:28.078 }, 00:14:28.078 { 00:14:28.078 "name": "BaseBdev3", 00:14:28.078 "uuid": "362f864e-7e57-4a66-a013-3c6a6f4de6cd", 00:14:28.078 "is_configured": true, 00:14:28.078 "data_offset": 2048, 00:14:28.078 "data_size": 63488 00:14:28.078 } 00:14:28.078 ] 00:14:28.078 }' 00:14:28.078 13:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:28.078 13:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.645 13:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.645 13:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:28.903 13:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:28.903 13:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.903 13:14:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:29.161 13:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u db43cb04-88a8-4633-8da4-f7a7174df464 00:14:29.728 [2024-07-25 13:14:39.959669] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:29.728 [2024-07-25 13:14:39.959799] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xdc3be0 00:14:29.728 [2024-07-25 13:14:39.959811] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:29.728 [2024-07-25 13:14:39.959976] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdc2ad0 00:14:29.728 [2024-07-25 13:14:39.960074] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xdc3be0 00:14:29.728 [2024-07-25 13:14:39.960083] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xdc3be0 00:14:29.728 [2024-07-25 13:14:39.960179] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.728 NewBaseBdev 00:14:29.728 13:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:29.728 13:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:29.728 13:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:29.728 13:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:29.728 13:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:29.728 13:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:29.728 13:14:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:29.728 13:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:30.295 [ 00:14:30.295 { 00:14:30.295 "name": "NewBaseBdev", 00:14:30.295 "aliases": [ 00:14:30.295 "db43cb04-88a8-4633-8da4-f7a7174df464" 00:14:30.295 ], 00:14:30.295 "product_name": "Malloc disk", 00:14:30.295 "block_size": 512, 00:14:30.295 "num_blocks": 65536, 00:14:30.295 "uuid": "db43cb04-88a8-4633-8da4-f7a7174df464", 00:14:30.295 "assigned_rate_limits": { 00:14:30.295 "rw_ios_per_sec": 0, 00:14:30.295 "rw_mbytes_per_sec": 0, 00:14:30.295 "r_mbytes_per_sec": 0, 00:14:30.295 "w_mbytes_per_sec": 0 00:14:30.295 }, 00:14:30.295 "claimed": true, 00:14:30.295 "claim_type": "exclusive_write", 00:14:30.295 "zoned": false, 00:14:30.295 "supported_io_types": { 00:14:30.295 "read": true, 00:14:30.295 "write": true, 00:14:30.295 "unmap": true, 00:14:30.295 "flush": true, 00:14:30.295 "reset": true, 00:14:30.295 "nvme_admin": false, 00:14:30.295 "nvme_io": false, 00:14:30.295 "nvme_io_md": false, 00:14:30.295 "write_zeroes": true, 00:14:30.295 "zcopy": true, 00:14:30.295 "get_zone_info": false, 00:14:30.295 "zone_management": false, 00:14:30.295 "zone_append": false, 00:14:30.295 "compare": false, 00:14:30.295 "compare_and_write": false, 00:14:30.295 "abort": true, 00:14:30.295 "seek_hole": false, 00:14:30.295 "seek_data": false, 00:14:30.295 "copy": true, 00:14:30.295 "nvme_iov_md": false 00:14:30.295 }, 00:14:30.295 "memory_domains": [ 00:14:30.295 { 00:14:30.295 "dma_device_id": "system", 00:14:30.295 "dma_device_type": 1 00:14:30.295 }, 00:14:30.295 { 00:14:30.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.295 "dma_device_type": 2 00:14:30.295 } 
00:14:30.295 ], 00:14:30.295 "driver_specific": {} 00:14:30.295 } 00:14:30.295 ] 00:14:30.295 13:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:30.295 13:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:30.295 13:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:30.295 13:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:30.295 13:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:30.295 13:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:30.295 13:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:30.295 13:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:30.295 13:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:30.295 13:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:30.295 13:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:30.295 13:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.295 13:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.554 13:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:30.554 "name": "Existed_Raid", 00:14:30.554 "uuid": "a09d5bb9-9f31-4c7f-986e-f38ccbf8c2ed", 00:14:30.554 "strip_size_kb": 64, 00:14:30.554 "state": "online", 00:14:30.554 
"raid_level": "raid0", 00:14:30.554 "superblock": true, 00:14:30.554 "num_base_bdevs": 3, 00:14:30.554 "num_base_bdevs_discovered": 3, 00:14:30.554 "num_base_bdevs_operational": 3, 00:14:30.554 "base_bdevs_list": [ 00:14:30.554 { 00:14:30.554 "name": "NewBaseBdev", 00:14:30.554 "uuid": "db43cb04-88a8-4633-8da4-f7a7174df464", 00:14:30.554 "is_configured": true, 00:14:30.554 "data_offset": 2048, 00:14:30.554 "data_size": 63488 00:14:30.554 }, 00:14:30.554 { 00:14:30.554 "name": "BaseBdev2", 00:14:30.554 "uuid": "2f2cecea-d020-47b8-b62f-ba311f2b318e", 00:14:30.554 "is_configured": true, 00:14:30.554 "data_offset": 2048, 00:14:30.554 "data_size": 63488 00:14:30.554 }, 00:14:30.554 { 00:14:30.554 "name": "BaseBdev3", 00:14:30.554 "uuid": "362f864e-7e57-4a66-a013-3c6a6f4de6cd", 00:14:30.554 "is_configured": true, 00:14:30.554 "data_offset": 2048, 00:14:30.554 "data_size": 63488 00:14:30.554 } 00:14:30.554 ] 00:14:30.554 }' 00:14:30.554 13:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:30.554 13:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.121 13:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:31.121 13:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:31.121 13:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:31.121 13:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:31.121 13:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:31.121 13:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:31.121 13:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:31.121 13:14:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:31.379 [2024-07-25 13:14:41.728623] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.379 13:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:31.379 "name": "Existed_Raid", 00:14:31.379 "aliases": [ 00:14:31.379 "a09d5bb9-9f31-4c7f-986e-f38ccbf8c2ed" 00:14:31.379 ], 00:14:31.379 "product_name": "Raid Volume", 00:14:31.379 "block_size": 512, 00:14:31.379 "num_blocks": 190464, 00:14:31.379 "uuid": "a09d5bb9-9f31-4c7f-986e-f38ccbf8c2ed", 00:14:31.379 "assigned_rate_limits": { 00:14:31.379 "rw_ios_per_sec": 0, 00:14:31.379 "rw_mbytes_per_sec": 0, 00:14:31.379 "r_mbytes_per_sec": 0, 00:14:31.379 "w_mbytes_per_sec": 0 00:14:31.379 }, 00:14:31.379 "claimed": false, 00:14:31.379 "zoned": false, 00:14:31.379 "supported_io_types": { 00:14:31.379 "read": true, 00:14:31.379 "write": true, 00:14:31.379 "unmap": true, 00:14:31.379 "flush": true, 00:14:31.379 "reset": true, 00:14:31.379 "nvme_admin": false, 00:14:31.379 "nvme_io": false, 00:14:31.379 "nvme_io_md": false, 00:14:31.379 "write_zeroes": true, 00:14:31.379 "zcopy": false, 00:14:31.379 "get_zone_info": false, 00:14:31.379 "zone_management": false, 00:14:31.379 "zone_append": false, 00:14:31.379 "compare": false, 00:14:31.379 "compare_and_write": false, 00:14:31.379 "abort": false, 00:14:31.379 "seek_hole": false, 00:14:31.379 "seek_data": false, 00:14:31.379 "copy": false, 00:14:31.379 "nvme_iov_md": false 00:14:31.379 }, 00:14:31.379 "memory_domains": [ 00:14:31.379 { 00:14:31.379 "dma_device_id": "system", 00:14:31.379 "dma_device_type": 1 00:14:31.379 }, 00:14:31.379 { 00:14:31.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.379 "dma_device_type": 2 00:14:31.379 }, 00:14:31.379 { 00:14:31.379 "dma_device_id": "system", 00:14:31.379 "dma_device_type": 1 00:14:31.379 }, 
00:14:31.379 { 00:14:31.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.379 "dma_device_type": 2 00:14:31.379 }, 00:14:31.379 { 00:14:31.379 "dma_device_id": "system", 00:14:31.379 "dma_device_type": 1 00:14:31.379 }, 00:14:31.379 { 00:14:31.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.379 "dma_device_type": 2 00:14:31.379 } 00:14:31.379 ], 00:14:31.379 "driver_specific": { 00:14:31.379 "raid": { 00:14:31.379 "uuid": "a09d5bb9-9f31-4c7f-986e-f38ccbf8c2ed", 00:14:31.379 "strip_size_kb": 64, 00:14:31.379 "state": "online", 00:14:31.379 "raid_level": "raid0", 00:14:31.379 "superblock": true, 00:14:31.379 "num_base_bdevs": 3, 00:14:31.379 "num_base_bdevs_discovered": 3, 00:14:31.379 "num_base_bdevs_operational": 3, 00:14:31.379 "base_bdevs_list": [ 00:14:31.379 { 00:14:31.379 "name": "NewBaseBdev", 00:14:31.379 "uuid": "db43cb04-88a8-4633-8da4-f7a7174df464", 00:14:31.379 "is_configured": true, 00:14:31.379 "data_offset": 2048, 00:14:31.379 "data_size": 63488 00:14:31.379 }, 00:14:31.379 { 00:14:31.379 "name": "BaseBdev2", 00:14:31.379 "uuid": "2f2cecea-d020-47b8-b62f-ba311f2b318e", 00:14:31.379 "is_configured": true, 00:14:31.379 "data_offset": 2048, 00:14:31.379 "data_size": 63488 00:14:31.379 }, 00:14:31.379 { 00:14:31.379 "name": "BaseBdev3", 00:14:31.379 "uuid": "362f864e-7e57-4a66-a013-3c6a6f4de6cd", 00:14:31.379 "is_configured": true, 00:14:31.379 "data_offset": 2048, 00:14:31.379 "data_size": 63488 00:14:31.379 } 00:14:31.380 ] 00:14:31.380 } 00:14:31.380 } 00:14:31.380 }' 00:14:31.380 13:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:31.380 13:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:31.380 BaseBdev2 00:14:31.380 BaseBdev3' 00:14:31.380 13:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:31.380 
13:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:31.380 13:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:31.638 13:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:31.638 "name": "NewBaseBdev", 00:14:31.638 "aliases": [ 00:14:31.638 "db43cb04-88a8-4633-8da4-f7a7174df464" 00:14:31.638 ], 00:14:31.638 "product_name": "Malloc disk", 00:14:31.638 "block_size": 512, 00:14:31.638 "num_blocks": 65536, 00:14:31.638 "uuid": "db43cb04-88a8-4633-8da4-f7a7174df464", 00:14:31.638 "assigned_rate_limits": { 00:14:31.638 "rw_ios_per_sec": 0, 00:14:31.638 "rw_mbytes_per_sec": 0, 00:14:31.638 "r_mbytes_per_sec": 0, 00:14:31.638 "w_mbytes_per_sec": 0 00:14:31.638 }, 00:14:31.638 "claimed": true, 00:14:31.638 "claim_type": "exclusive_write", 00:14:31.638 "zoned": false, 00:14:31.638 "supported_io_types": { 00:14:31.638 "read": true, 00:14:31.638 "write": true, 00:14:31.638 "unmap": true, 00:14:31.638 "flush": true, 00:14:31.638 "reset": true, 00:14:31.638 "nvme_admin": false, 00:14:31.638 "nvme_io": false, 00:14:31.638 "nvme_io_md": false, 00:14:31.638 "write_zeroes": true, 00:14:31.638 "zcopy": true, 00:14:31.638 "get_zone_info": false, 00:14:31.638 "zone_management": false, 00:14:31.638 "zone_append": false, 00:14:31.638 "compare": false, 00:14:31.638 "compare_and_write": false, 00:14:31.638 "abort": true, 00:14:31.638 "seek_hole": false, 00:14:31.638 "seek_data": false, 00:14:31.638 "copy": true, 00:14:31.638 "nvme_iov_md": false 00:14:31.638 }, 00:14:31.638 "memory_domains": [ 00:14:31.638 { 00:14:31.638 "dma_device_id": "system", 00:14:31.638 "dma_device_type": 1 00:14:31.638 }, 00:14:31.638 { 00:14:31.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.638 "dma_device_type": 2 00:14:31.638 } 00:14:31.638 ], 00:14:31.638 
"driver_specific": {} 00:14:31.638 }' 00:14:31.638 13:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.638 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.638 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:31.638 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.638 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.896 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:31.896 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.896 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.896 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:31.896 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.896 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.896 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:31.896 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:31.896 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:31.896 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:32.154 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:32.155 "name": "BaseBdev2", 00:14:32.155 "aliases": [ 00:14:32.155 "2f2cecea-d020-47b8-b62f-ba311f2b318e" 00:14:32.155 ], 00:14:32.155 "product_name": 
"Malloc disk", 00:14:32.155 "block_size": 512, 00:14:32.155 "num_blocks": 65536, 00:14:32.155 "uuid": "2f2cecea-d020-47b8-b62f-ba311f2b318e", 00:14:32.155 "assigned_rate_limits": { 00:14:32.155 "rw_ios_per_sec": 0, 00:14:32.155 "rw_mbytes_per_sec": 0, 00:14:32.155 "r_mbytes_per_sec": 0, 00:14:32.155 "w_mbytes_per_sec": 0 00:14:32.155 }, 00:14:32.155 "claimed": true, 00:14:32.155 "claim_type": "exclusive_write", 00:14:32.155 "zoned": false, 00:14:32.155 "supported_io_types": { 00:14:32.155 "read": true, 00:14:32.155 "write": true, 00:14:32.155 "unmap": true, 00:14:32.155 "flush": true, 00:14:32.155 "reset": true, 00:14:32.155 "nvme_admin": false, 00:14:32.155 "nvme_io": false, 00:14:32.155 "nvme_io_md": false, 00:14:32.155 "write_zeroes": true, 00:14:32.155 "zcopy": true, 00:14:32.155 "get_zone_info": false, 00:14:32.155 "zone_management": false, 00:14:32.155 "zone_append": false, 00:14:32.155 "compare": false, 00:14:32.155 "compare_and_write": false, 00:14:32.155 "abort": true, 00:14:32.155 "seek_hole": false, 00:14:32.155 "seek_data": false, 00:14:32.155 "copy": true, 00:14:32.155 "nvme_iov_md": false 00:14:32.155 }, 00:14:32.155 "memory_domains": [ 00:14:32.155 { 00:14:32.155 "dma_device_id": "system", 00:14:32.155 "dma_device_type": 1 00:14:32.155 }, 00:14:32.155 { 00:14:32.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.155 "dma_device_type": 2 00:14:32.155 } 00:14:32.155 ], 00:14:32.155 "driver_specific": {} 00:14:32.155 }' 00:14:32.155 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:32.155 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:32.155 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:32.155 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:32.413 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:32.413 
13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:32.413 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:32.413 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:32.413 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:32.413 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:32.413 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:32.413 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:32.413 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:32.413 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:32.413 13:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:32.671 13:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:32.671 "name": "BaseBdev3", 00:14:32.671 "aliases": [ 00:14:32.671 "362f864e-7e57-4a66-a013-3c6a6f4de6cd" 00:14:32.671 ], 00:14:32.671 "product_name": "Malloc disk", 00:14:32.671 "block_size": 512, 00:14:32.671 "num_blocks": 65536, 00:14:32.671 "uuid": "362f864e-7e57-4a66-a013-3c6a6f4de6cd", 00:14:32.671 "assigned_rate_limits": { 00:14:32.671 "rw_ios_per_sec": 0, 00:14:32.671 "rw_mbytes_per_sec": 0, 00:14:32.671 "r_mbytes_per_sec": 0, 00:14:32.671 "w_mbytes_per_sec": 0 00:14:32.671 }, 00:14:32.671 "claimed": true, 00:14:32.671 "claim_type": "exclusive_write", 00:14:32.671 "zoned": false, 00:14:32.671 "supported_io_types": { 00:14:32.671 "read": true, 00:14:32.671 "write": true, 00:14:32.671 "unmap": true, 
00:14:32.671 "flush": true, 00:14:32.671 "reset": true, 00:14:32.671 "nvme_admin": false, 00:14:32.671 "nvme_io": false, 00:14:32.671 "nvme_io_md": false, 00:14:32.671 "write_zeroes": true, 00:14:32.671 "zcopy": true, 00:14:32.671 "get_zone_info": false, 00:14:32.671 "zone_management": false, 00:14:32.671 "zone_append": false, 00:14:32.671 "compare": false, 00:14:32.671 "compare_and_write": false, 00:14:32.671 "abort": true, 00:14:32.671 "seek_hole": false, 00:14:32.671 "seek_data": false, 00:14:32.671 "copy": true, 00:14:32.671 "nvme_iov_md": false 00:14:32.671 }, 00:14:32.671 "memory_domains": [ 00:14:32.671 { 00:14:32.671 "dma_device_id": "system", 00:14:32.671 "dma_device_type": 1 00:14:32.671 }, 00:14:32.671 { 00:14:32.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.671 "dma_device_type": 2 00:14:32.671 } 00:14:32.671 ], 00:14:32.671 "driver_specific": {} 00:14:32.671 }' 00:14:32.671 13:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:32.671 13:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:32.929 13:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:32.929 13:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:32.929 13:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:32.929 13:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:32.930 13:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:32.930 13:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:32.930 13:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:32.930 13:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:32.930 13:14:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:33.187 13:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:33.187 13:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:33.444 [2024-07-25 13:14:43.926180] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:33.444 [2024-07-25 13:14:43.926203] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:33.444 [2024-07-25 13:14:43.926251] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.444 [2024-07-25 13:14:43.926295] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.444 [2024-07-25 13:14:43.926306] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdc3be0 name Existed_Raid, state offline 00:14:33.718 13:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 855937 00:14:33.718 13:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 855937 ']' 00:14:33.718 13:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 855937 00:14:33.718 13:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:33.718 13:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:33.718 13:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 855937 00:14:33.718 13:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:33.718 13:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:14:33.718 13:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 855937' 00:14:33.718 killing process with pid 855937 00:14:33.718 13:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 855937 00:14:33.718 [2024-07-25 13:14:44.014710] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.718 13:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 855937 00:14:33.718 [2024-07-25 13:14:44.037742] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.989 13:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:33.989 00:14:33.989 real 0m29.109s 00:14:33.989 user 0m53.438s 00:14:33.989 sys 0m5.118s 00:14:33.989 13:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.989 13:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.989 ************************************ 00:14:33.989 END TEST raid_state_function_test_sb 00:14:33.989 ************************************ 00:14:33.989 13:14:44 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:14:33.989 13:14:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:33.989 13:14:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.989 13:14:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.989 ************************************ 00:14:33.989 START TEST raid_superblock_test 00:14:33.989 ************************************ 00:14:33.989 13:14:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:14:33.989 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:14:33.989 13:14:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:14:33.989 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:14:33.989 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:14:33.989 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:14:33.989 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:14:33.989 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=861559 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 861559 /var/tmp/spdk-raid.sock 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 861559 ']' 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 
00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.990 13:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:33.990 [2024-07-25 13:14:44.367032] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:14:33.990 [2024-07-25 13:14:44.367088] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861559 ] 00:14:33.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:33.990 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:34.249 [2024-07-25 13:14:44.499824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.249 [2024-07-25 13:14:44.586164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.249 [2024-07-25 13:14:44.641784]
bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.249 [2024-07-25 13:14:44.641810] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.815 13:14:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:34.815 13:14:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:34.815 13:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:14:34.815 13:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:34.815 13:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:14:34.815 13:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:14:34.815 13:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:34.815 13:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:34.815 13:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:34.815 13:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:34.815 13:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:35.383 malloc1 00:14:35.383 13:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:35.642 [2024-07-25 13:14:45.993768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:35.642 [2024-07-25 13:14:45.993812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:35.642 [2024-07-25 13:14:45.993830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a722f0 00:14:35.642 [2024-07-25 13:14:45.993842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.642 [2024-07-25 13:14:45.995447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.642 [2024-07-25 13:14:45.995475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:35.642 pt1 00:14:35.642 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:35.642 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:35.642 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:14:35.642 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:14:35.642 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:35.642 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:35.642 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:35.642 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:35.642 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:36.209 malloc2 00:14:36.209 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:36.468 [2024-07-25 13:14:46.736249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:14:36.468 [2024-07-25 13:14:46.736292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.468 [2024-07-25 13:14:46.736307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c09f70 00:14:36.468 [2024-07-25 13:14:46.736319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.468 [2024-07-25 13:14:46.737749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.468 [2024-07-25 13:14:46.737776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:36.468 pt2 00:14:36.468 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:36.468 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:36.468 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:14:36.468 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:14:36.468 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:36.468 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:36.468 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:36.468 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:36.468 13:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:37.035 malloc3 00:14:37.035 13:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 
00000000-0000-0000-0000-000000000003 00:14:37.035 [2024-07-25 13:14:47.478604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:37.035 [2024-07-25 13:14:47.478649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.035 [2024-07-25 13:14:47.478666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c0d830 00:14:37.035 [2024-07-25 13:14:47.478677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.035 [2024-07-25 13:14:47.480044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.035 [2024-07-25 13:14:47.480071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:37.035 pt3 00:14:37.035 13:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:37.035 13:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:37.035 13:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:14:37.602 [2024-07-25 13:14:47.979928] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:37.602 [2024-07-25 13:14:47.981112] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:37.602 [2024-07-25 13:14:47.981172] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:37.602 [2024-07-25 13:14:47.981295] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c0cab0 00:14:37.602 [2024-07-25 13:14:47.981305] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:37.602 [2024-07-25 13:14:47.981491] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c11d20 00:14:37.602 [2024-07-25 13:14:47.981613] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c0cab0 00:14:37.602 [2024-07-25 13:14:47.981622] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c0cab0 00:14:37.602 [2024-07-25 13:14:47.981723] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.602 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:37.602 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:37.602 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:37.602 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:37.602 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:37.602 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:37.602 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:37.602 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:37.602 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:37.602 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:37.602 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.602 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.860 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:37.860 "name": "raid_bdev1", 00:14:37.860 "uuid": "b345b396-a1b6-446a-9ef5-f2c5358a3a73", 00:14:37.860 
"strip_size_kb": 64, 00:14:37.860 "state": "online", 00:14:37.860 "raid_level": "raid0", 00:14:37.860 "superblock": true, 00:14:37.860 "num_base_bdevs": 3, 00:14:37.860 "num_base_bdevs_discovered": 3, 00:14:37.860 "num_base_bdevs_operational": 3, 00:14:37.860 "base_bdevs_list": [ 00:14:37.860 { 00:14:37.860 "name": "pt1", 00:14:37.860 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:37.860 "is_configured": true, 00:14:37.860 "data_offset": 2048, 00:14:37.860 "data_size": 63488 00:14:37.860 }, 00:14:37.860 { 00:14:37.860 "name": "pt2", 00:14:37.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.860 "is_configured": true, 00:14:37.860 "data_offset": 2048, 00:14:37.860 "data_size": 63488 00:14:37.860 }, 00:14:37.860 { 00:14:37.860 "name": "pt3", 00:14:37.860 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:37.860 "is_configured": true, 00:14:37.860 "data_offset": 2048, 00:14:37.860 "data_size": 63488 00:14:37.860 } 00:14:37.860 ] 00:14:37.860 }' 00:14:37.860 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:37.860 13:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.425 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:14:38.425 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:38.425 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:38.425 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:38.425 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:38.425 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:38.425 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b raid_bdev1 00:14:38.425 13:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:38.683 [2024-07-25 13:14:49.030914] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:38.683 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:38.683 "name": "raid_bdev1", 00:14:38.683 "aliases": [ 00:14:38.683 "b345b396-a1b6-446a-9ef5-f2c5358a3a73" 00:14:38.683 ], 00:14:38.683 "product_name": "Raid Volume", 00:14:38.683 "block_size": 512, 00:14:38.683 "num_blocks": 190464, 00:14:38.683 "uuid": "b345b396-a1b6-446a-9ef5-f2c5358a3a73", 00:14:38.683 "assigned_rate_limits": { 00:14:38.683 "rw_ios_per_sec": 0, 00:14:38.683 "rw_mbytes_per_sec": 0, 00:14:38.683 "r_mbytes_per_sec": 0, 00:14:38.683 "w_mbytes_per_sec": 0 00:14:38.683 }, 00:14:38.683 "claimed": false, 00:14:38.683 "zoned": false, 00:14:38.683 "supported_io_types": { 00:14:38.683 "read": true, 00:14:38.683 "write": true, 00:14:38.683 "unmap": true, 00:14:38.683 "flush": true, 00:14:38.683 "reset": true, 00:14:38.683 "nvme_admin": false, 00:14:38.683 "nvme_io": false, 00:14:38.683 "nvme_io_md": false, 00:14:38.683 "write_zeroes": true, 00:14:38.683 "zcopy": false, 00:14:38.683 "get_zone_info": false, 00:14:38.683 "zone_management": false, 00:14:38.683 "zone_append": false, 00:14:38.683 "compare": false, 00:14:38.683 "compare_and_write": false, 00:14:38.683 "abort": false, 00:14:38.683 "seek_hole": false, 00:14:38.683 "seek_data": false, 00:14:38.683 "copy": false, 00:14:38.683 "nvme_iov_md": false 00:14:38.683 }, 00:14:38.683 "memory_domains": [ 00:14:38.683 { 00:14:38.683 "dma_device_id": "system", 00:14:38.683 "dma_device_type": 1 00:14:38.683 }, 00:14:38.683 { 00:14:38.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.683 "dma_device_type": 2 00:14:38.683 }, 00:14:38.683 { 00:14:38.683 "dma_device_id": "system", 00:14:38.683 "dma_device_type": 1 00:14:38.683 }, 00:14:38.683 { 00:14:38.683 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.683 "dma_device_type": 2 00:14:38.683 }, 00:14:38.683 { 00:14:38.683 "dma_device_id": "system", 00:14:38.683 "dma_device_type": 1 00:14:38.683 }, 00:14:38.683 { 00:14:38.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.683 "dma_device_type": 2 00:14:38.683 } 00:14:38.683 ], 00:14:38.683 "driver_specific": { 00:14:38.683 "raid": { 00:14:38.683 "uuid": "b345b396-a1b6-446a-9ef5-f2c5358a3a73", 00:14:38.683 "strip_size_kb": 64, 00:14:38.683 "state": "online", 00:14:38.683 "raid_level": "raid0", 00:14:38.683 "superblock": true, 00:14:38.683 "num_base_bdevs": 3, 00:14:38.683 "num_base_bdevs_discovered": 3, 00:14:38.683 "num_base_bdevs_operational": 3, 00:14:38.683 "base_bdevs_list": [ 00:14:38.683 { 00:14:38.683 "name": "pt1", 00:14:38.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.683 "is_configured": true, 00:14:38.683 "data_offset": 2048, 00:14:38.683 "data_size": 63488 00:14:38.683 }, 00:14:38.683 { 00:14:38.683 "name": "pt2", 00:14:38.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.683 "is_configured": true, 00:14:38.683 "data_offset": 2048, 00:14:38.683 "data_size": 63488 00:14:38.683 }, 00:14:38.683 { 00:14:38.683 "name": "pt3", 00:14:38.683 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.683 "is_configured": true, 00:14:38.683 "data_offset": 2048, 00:14:38.683 "data_size": 63488 00:14:38.683 } 00:14:38.683 ] 00:14:38.683 } 00:14:38.683 } 00:14:38.683 }' 00:14:38.683 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:38.683 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:38.683 pt2 00:14:38.683 pt3' 00:14:38.683 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:38.683 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:38.683 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:38.941 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:38.941 "name": "pt1", 00:14:38.941 "aliases": [ 00:14:38.941 "00000000-0000-0000-0000-000000000001" 00:14:38.941 ], 00:14:38.941 "product_name": "passthru", 00:14:38.941 "block_size": 512, 00:14:38.941 "num_blocks": 65536, 00:14:38.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.941 "assigned_rate_limits": { 00:14:38.941 "rw_ios_per_sec": 0, 00:14:38.941 "rw_mbytes_per_sec": 0, 00:14:38.941 "r_mbytes_per_sec": 0, 00:14:38.941 "w_mbytes_per_sec": 0 00:14:38.941 }, 00:14:38.941 "claimed": true, 00:14:38.941 "claim_type": "exclusive_write", 00:14:38.941 "zoned": false, 00:14:38.941 "supported_io_types": { 00:14:38.941 "read": true, 00:14:38.941 "write": true, 00:14:38.941 "unmap": true, 00:14:38.941 "flush": true, 00:14:38.941 "reset": true, 00:14:38.941 "nvme_admin": false, 00:14:38.941 "nvme_io": false, 00:14:38.941 "nvme_io_md": false, 00:14:38.941 "write_zeroes": true, 00:14:38.941 "zcopy": true, 00:14:38.941 "get_zone_info": false, 00:14:38.941 "zone_management": false, 00:14:38.941 "zone_append": false, 00:14:38.941 "compare": false, 00:14:38.941 "compare_and_write": false, 00:14:38.941 "abort": true, 00:14:38.941 "seek_hole": false, 00:14:38.941 "seek_data": false, 00:14:38.941 "copy": true, 00:14:38.941 "nvme_iov_md": false 00:14:38.941 }, 00:14:38.941 "memory_domains": [ 00:14:38.941 { 00:14:38.941 "dma_device_id": "system", 00:14:38.941 "dma_device_type": 1 00:14:38.941 }, 00:14:38.941 { 00:14:38.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.941 "dma_device_type": 2 00:14:38.941 } 00:14:38.941 ], 00:14:38.941 "driver_specific": { 00:14:38.941 "passthru": { 00:14:38.941 "name": "pt1", 00:14:38.941 "base_bdev_name": "malloc1" 
00:14:38.941 } 00:14:38.941 } 00:14:38.941 }' 00:14:38.941 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:38.941 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:38.941 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:38.941 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.199 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.199 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:39.199 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.199 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.199 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:39.199 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.199 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.199 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:39.199 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:39.199 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:39.199 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:39.456 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:39.456 "name": "pt2", 00:14:39.456 "aliases": [ 00:14:39.456 "00000000-0000-0000-0000-000000000002" 00:14:39.456 ], 00:14:39.456 "product_name": "passthru", 00:14:39.456 "block_size": 512, 00:14:39.456 "num_blocks": 65536, 00:14:39.456 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:14:39.456 "assigned_rate_limits": { 00:14:39.456 "rw_ios_per_sec": 0, 00:14:39.456 "rw_mbytes_per_sec": 0, 00:14:39.456 "r_mbytes_per_sec": 0, 00:14:39.456 "w_mbytes_per_sec": 0 00:14:39.456 }, 00:14:39.456 "claimed": true, 00:14:39.457 "claim_type": "exclusive_write", 00:14:39.457 "zoned": false, 00:14:39.457 "supported_io_types": { 00:14:39.457 "read": true, 00:14:39.457 "write": true, 00:14:39.457 "unmap": true, 00:14:39.457 "flush": true, 00:14:39.457 "reset": true, 00:14:39.457 "nvme_admin": false, 00:14:39.457 "nvme_io": false, 00:14:39.457 "nvme_io_md": false, 00:14:39.457 "write_zeroes": true, 00:14:39.457 "zcopy": true, 00:14:39.457 "get_zone_info": false, 00:14:39.457 "zone_management": false, 00:14:39.457 "zone_append": false, 00:14:39.457 "compare": false, 00:14:39.457 "compare_and_write": false, 00:14:39.457 "abort": true, 00:14:39.457 "seek_hole": false, 00:14:39.457 "seek_data": false, 00:14:39.457 "copy": true, 00:14:39.457 "nvme_iov_md": false 00:14:39.457 }, 00:14:39.457 "memory_domains": [ 00:14:39.457 { 00:14:39.457 "dma_device_id": "system", 00:14:39.457 "dma_device_type": 1 00:14:39.457 }, 00:14:39.457 { 00:14:39.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.457 "dma_device_type": 2 00:14:39.457 } 00:14:39.457 ], 00:14:39.457 "driver_specific": { 00:14:39.457 "passthru": { 00:14:39.457 "name": "pt2", 00:14:39.457 "base_bdev_name": "malloc2" 00:14:39.457 } 00:14:39.457 } 00:14:39.457 }' 00:14:39.457 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:39.457 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:39.715 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:39.715 13:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.715 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.715 13:14:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:39.715 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.715 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.715 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:39.715 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.715 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.973 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:39.973 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:39.973 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:39.974 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:39.974 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:39.974 "name": "pt3", 00:14:39.974 "aliases": [ 00:14:39.974 "00000000-0000-0000-0000-000000000003" 00:14:39.974 ], 00:14:39.974 "product_name": "passthru", 00:14:39.974 "block_size": 512, 00:14:39.974 "num_blocks": 65536, 00:14:39.974 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.974 "assigned_rate_limits": { 00:14:39.974 "rw_ios_per_sec": 0, 00:14:39.974 "rw_mbytes_per_sec": 0, 00:14:39.974 "r_mbytes_per_sec": 0, 00:14:39.974 "w_mbytes_per_sec": 0 00:14:39.974 }, 00:14:39.974 "claimed": true, 00:14:39.974 "claim_type": "exclusive_write", 00:14:39.974 "zoned": false, 00:14:39.974 "supported_io_types": { 00:14:39.974 "read": true, 00:14:39.974 "write": true, 00:14:39.974 "unmap": true, 00:14:39.974 "flush": true, 00:14:39.974 "reset": true, 00:14:39.974 "nvme_admin": false, 00:14:39.974 
"nvme_io": false, 00:14:39.974 "nvme_io_md": false, 00:14:39.974 "write_zeroes": true, 00:14:39.974 "zcopy": true, 00:14:39.974 "get_zone_info": false, 00:14:39.974 "zone_management": false, 00:14:39.974 "zone_append": false, 00:14:39.974 "compare": false, 00:14:39.974 "compare_and_write": false, 00:14:39.974 "abort": true, 00:14:39.974 "seek_hole": false, 00:14:39.974 "seek_data": false, 00:14:39.974 "copy": true, 00:14:39.974 "nvme_iov_md": false 00:14:39.974 }, 00:14:39.974 "memory_domains": [ 00:14:39.974 { 00:14:39.974 "dma_device_id": "system", 00:14:39.974 "dma_device_type": 1 00:14:39.974 }, 00:14:39.974 { 00:14:39.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.974 "dma_device_type": 2 00:14:39.974 } 00:14:39.974 ], 00:14:39.974 "driver_specific": { 00:14:39.974 "passthru": { 00:14:39.974 "name": "pt3", 00:14:39.974 "base_bdev_name": "malloc3" 00:14:39.974 } 00:14:39.974 } 00:14:39.974 }' 00:14:40.232 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:40.232 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:40.232 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:40.232 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:40.232 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:40.232 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:40.232 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:40.232 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:40.232 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:40.232 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:40.490 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:14:40.490 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:40.490 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:40.490 13:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:14:40.748 [2024-07-25 13:14:51.004164] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:40.748 13:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=b345b396-a1b6-446a-9ef5-f2c5358a3a73 00:14:40.748 13:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z b345b396-a1b6-446a-9ef5-f2c5358a3a73 ']' 00:14:40.748 13:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:40.748 [2024-07-25 13:14:51.228494] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.748 [2024-07-25 13:14:51.228509] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.748 [2024-07-25 13:14:51.228549] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.748 [2024-07-25 13:14:51.228595] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.748 [2024-07-25 13:14:51.228605] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c0cab0 name raid_bdev1, state offline 00:14:41.006 13:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.006 13:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:14:41.006 13:14:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:14:41.006 13:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:14:41.006 13:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:41.006 13:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:41.264 13:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:41.264 13:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:41.522 13:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:41.522 13:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:41.779 13:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:41.779 13:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:42.037 13:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:14:42.037 13:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:42.037 13:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:42.037 13:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:42.037 13:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:42.037 13:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.037 13:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:42.037 13:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.037 13:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:42.037 13:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.037 13:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:42.037 13:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:14:42.038 13:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:42.296 [2024-07-25 13:14:52.592026] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:42.296 [2024-07-25 13:14:52.593284] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:42.296 [2024-07-25 13:14:52.593332] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:42.296 [2024-07-25 13:14:52.593373] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:42.296 [2024-07-25 13:14:52.593409] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:42.296 [2024-07-25 13:14:52.593430] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:42.296 [2024-07-25 13:14:52.593447] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:42.296 [2024-07-25 13:14:52.593456] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c0ca80 name raid_bdev1, state configuring 00:14:42.296 request: 00:14:42.296 { 00:14:42.296 "name": "raid_bdev1", 00:14:42.296 "raid_level": "raid0", 00:14:42.296 "base_bdevs": [ 00:14:42.296 "malloc1", 00:14:42.296 "malloc2", 00:14:42.296 "malloc3" 00:14:42.296 ], 00:14:42.296 "strip_size_kb": 64, 00:14:42.296 "superblock": false, 00:14:42.296 "method": "bdev_raid_create", 00:14:42.296 "req_id": 1 00:14:42.296 } 00:14:42.296 Got JSON-RPC error response 00:14:42.296 response: 00:14:42.296 { 00:14:42.296 "code": -17, 00:14:42.296 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:42.296 } 00:14:42.296 13:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:42.296 13:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:42.296 13:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:42.296 13:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:42.296 13:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.296 13:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r 
'.[]' 00:14:42.555 13:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:14:42.555 13:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:14:42.555 13:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:42.813 [2024-07-25 13:14:53.045174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:42.813 [2024-07-25 13:14:53.045216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.813 [2024-07-25 13:14:53.045233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c0ca80 00:14:42.813 [2024-07-25 13:14:53.045245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.813 [2024-07-25 13:14:53.046728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.813 [2024-07-25 13:14:53.046756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:42.813 [2024-07-25 13:14:53.046817] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:42.813 [2024-07-25 13:14:53.046839] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:42.813 pt1 00:14:42.813 13:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:42.813 13:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:42.813 13:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:42.813 13:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:42.813 13:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:14:42.813 13:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:42.813 13:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:42.813 13:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:42.813 13:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:42.813 13:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:42.813 13:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.813 13:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.813 13:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.813 "name": "raid_bdev1", 00:14:42.813 "uuid": "b345b396-a1b6-446a-9ef5-f2c5358a3a73", 00:14:42.813 "strip_size_kb": 64, 00:14:42.813 "state": "configuring", 00:14:42.813 "raid_level": "raid0", 00:14:42.813 "superblock": true, 00:14:42.813 "num_base_bdevs": 3, 00:14:42.813 "num_base_bdevs_discovered": 1, 00:14:42.813 "num_base_bdevs_operational": 3, 00:14:42.813 "base_bdevs_list": [ 00:14:42.813 { 00:14:42.813 "name": "pt1", 00:14:42.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.813 "is_configured": true, 00:14:42.813 "data_offset": 2048, 00:14:42.813 "data_size": 63488 00:14:42.813 }, 00:14:42.813 { 00:14:42.813 "name": null, 00:14:42.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.813 "is_configured": false, 00:14:42.813 "data_offset": 2048, 00:14:42.813 "data_size": 63488 00:14:42.813 }, 00:14:42.813 { 00:14:42.813 "name": null, 00:14:42.813 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.813 "is_configured": false, 00:14:42.814 "data_offset": 2048, 00:14:42.814 
"data_size": 63488 00:14:42.814 } 00:14:42.814 ] 00:14:42.814 }' 00:14:42.814 13:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.814 13:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.748 13:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:14:43.748 13:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:43.748 [2024-07-25 13:14:54.079893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:43.748 [2024-07-25 13:14:54.079937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.748 [2024-07-25 13:14:54.079957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c0a1a0 00:14:43.748 [2024-07-25 13:14:54.079968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.748 [2024-07-25 13:14:54.080284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.748 [2024-07-25 13:14:54.080306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:43.748 [2024-07-25 13:14:54.080362] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:43.748 [2024-07-25 13:14:54.080379] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:43.748 pt2 00:14:43.748 13:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:44.007 [2024-07-25 13:14:54.252356] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:44.007 13:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state 
raid_bdev1 configuring raid0 64 3 00:14:44.007 13:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:44.007 13:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:44.007 13:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:44.007 13:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:44.007 13:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:44.007 13:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:44.007 13:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:44.007 13:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:44.007 13:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:44.007 13:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.007 13:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.265 13:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:44.265 "name": "raid_bdev1", 00:14:44.265 "uuid": "b345b396-a1b6-446a-9ef5-f2c5358a3a73", 00:14:44.265 "strip_size_kb": 64, 00:14:44.265 "state": "configuring", 00:14:44.265 "raid_level": "raid0", 00:14:44.265 "superblock": true, 00:14:44.265 "num_base_bdevs": 3, 00:14:44.265 "num_base_bdevs_discovered": 1, 00:14:44.265 "num_base_bdevs_operational": 3, 00:14:44.265 "base_bdevs_list": [ 00:14:44.265 { 00:14:44.265 "name": "pt1", 00:14:44.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:44.265 "is_configured": true, 00:14:44.265 "data_offset": 
2048, 00:14:44.265 "data_size": 63488 00:14:44.265 }, 00:14:44.265 { 00:14:44.265 "name": null, 00:14:44.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.265 "is_configured": false, 00:14:44.265 "data_offset": 2048, 00:14:44.265 "data_size": 63488 00:14:44.265 }, 00:14:44.265 { 00:14:44.265 "name": null, 00:14:44.265 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:44.265 "is_configured": false, 00:14:44.265 "data_offset": 2048, 00:14:44.265 "data_size": 63488 00:14:44.265 } 00:14:44.265 ] 00:14:44.266 }' 00:14:44.266 13:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:44.266 13:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.833 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:14:44.833 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:44.833 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:44.833 [2024-07-25 13:14:55.299127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:44.833 [2024-07-25 13:14:55.299177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.833 [2024-07-25 13:14:55.299194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c0b3a0 00:14:44.833 [2024-07-25 13:14:55.299206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.833 [2024-07-25 13:14:55.299522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.833 [2024-07-25 13:14:55.299538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:44.833 [2024-07-25 13:14:55.299596] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev pt2 00:14:44.833 [2024-07-25 13:14:55.299618] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:44.833 pt2 00:14:44.833 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:14:44.833 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:44.833 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:45.092 [2024-07-25 13:14:55.467566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:45.092 [2024-07-25 13:14:55.467600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.092 [2024-07-25 13:14:55.467614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c0e1c0 00:14:45.092 [2024-07-25 13:14:55.467626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.092 [2024-07-25 13:14:55.467891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.092 [2024-07-25 13:14:55.467907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:45.092 [2024-07-25 13:14:55.467955] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:45.092 [2024-07-25 13:14:55.467970] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:45.092 [2024-07-25 13:14:55.468065] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c0f9f0 00:14:45.092 [2024-07-25 13:14:55.468075] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:45.092 [2024-07-25 13:14:55.468230] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c15240 00:14:45.092 [2024-07-25 13:14:55.468342] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c0f9f0 00:14:45.092 [2024-07-25 13:14:55.468351] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c0f9f0 00:14:45.092 [2024-07-25 13:14:55.468436] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.092 pt3 00:14:45.092 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:14:45.092 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:45.092 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:45.092 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:45.092 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:45.092 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:45.092 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:45.092 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:45.092 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:45.092 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:45.092 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:45.092 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:45.092 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.092 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:45.350 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:45.350 "name": "raid_bdev1", 00:14:45.350 "uuid": "b345b396-a1b6-446a-9ef5-f2c5358a3a73", 00:14:45.350 "strip_size_kb": 64, 00:14:45.350 "state": "online", 00:14:45.350 "raid_level": "raid0", 00:14:45.350 "superblock": true, 00:14:45.350 "num_base_bdevs": 3, 00:14:45.350 "num_base_bdevs_discovered": 3, 00:14:45.350 "num_base_bdevs_operational": 3, 00:14:45.350 "base_bdevs_list": [ 00:14:45.350 { 00:14:45.350 "name": "pt1", 00:14:45.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:45.350 "is_configured": true, 00:14:45.350 "data_offset": 2048, 00:14:45.350 "data_size": 63488 00:14:45.350 }, 00:14:45.350 { 00:14:45.350 "name": "pt2", 00:14:45.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:45.350 "is_configured": true, 00:14:45.350 "data_offset": 2048, 00:14:45.351 "data_size": 63488 00:14:45.351 }, 00:14:45.351 { 00:14:45.351 "name": "pt3", 00:14:45.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:45.351 "is_configured": true, 00:14:45.351 "data_offset": 2048, 00:14:45.351 "data_size": 63488 00:14:45.351 } 00:14:45.351 ] 00:14:45.351 }' 00:14:45.351 13:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:45.351 13:14:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.916 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:14:45.916 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:45.916 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:45.916 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:45.916 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:45.916 13:14:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@198 -- # local name 00:14:45.917 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:45.917 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:46.175 [2024-07-25 13:14:56.522611] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:46.175 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:46.175 "name": "raid_bdev1", 00:14:46.175 "aliases": [ 00:14:46.175 "b345b396-a1b6-446a-9ef5-f2c5358a3a73" 00:14:46.175 ], 00:14:46.175 "product_name": "Raid Volume", 00:14:46.175 "block_size": 512, 00:14:46.175 "num_blocks": 190464, 00:14:46.175 "uuid": "b345b396-a1b6-446a-9ef5-f2c5358a3a73", 00:14:46.175 "assigned_rate_limits": { 00:14:46.175 "rw_ios_per_sec": 0, 00:14:46.175 "rw_mbytes_per_sec": 0, 00:14:46.175 "r_mbytes_per_sec": 0, 00:14:46.175 "w_mbytes_per_sec": 0 00:14:46.175 }, 00:14:46.175 "claimed": false, 00:14:46.175 "zoned": false, 00:14:46.175 "supported_io_types": { 00:14:46.175 "read": true, 00:14:46.175 "write": true, 00:14:46.175 "unmap": true, 00:14:46.175 "flush": true, 00:14:46.175 "reset": true, 00:14:46.175 "nvme_admin": false, 00:14:46.175 "nvme_io": false, 00:14:46.175 "nvme_io_md": false, 00:14:46.175 "write_zeroes": true, 00:14:46.175 "zcopy": false, 00:14:46.175 "get_zone_info": false, 00:14:46.175 "zone_management": false, 00:14:46.175 "zone_append": false, 00:14:46.175 "compare": false, 00:14:46.175 "compare_and_write": false, 00:14:46.175 "abort": false, 00:14:46.175 "seek_hole": false, 00:14:46.175 "seek_data": false, 00:14:46.175 "copy": false, 00:14:46.175 "nvme_iov_md": false 00:14:46.175 }, 00:14:46.175 "memory_domains": [ 00:14:46.175 { 00:14:46.175 "dma_device_id": "system", 00:14:46.175 "dma_device_type": 1 00:14:46.175 }, 00:14:46.175 { 00:14:46.175 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:46.175 "dma_device_type": 2 00:14:46.175 }, 00:14:46.175 { 00:14:46.175 "dma_device_id": "system", 00:14:46.175 "dma_device_type": 1 00:14:46.175 }, 00:14:46.175 { 00:14:46.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.175 "dma_device_type": 2 00:14:46.175 }, 00:14:46.175 { 00:14:46.175 "dma_device_id": "system", 00:14:46.175 "dma_device_type": 1 00:14:46.175 }, 00:14:46.175 { 00:14:46.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.175 "dma_device_type": 2 00:14:46.175 } 00:14:46.175 ], 00:14:46.175 "driver_specific": { 00:14:46.175 "raid": { 00:14:46.175 "uuid": "b345b396-a1b6-446a-9ef5-f2c5358a3a73", 00:14:46.175 "strip_size_kb": 64, 00:14:46.175 "state": "online", 00:14:46.175 "raid_level": "raid0", 00:14:46.176 "superblock": true, 00:14:46.176 "num_base_bdevs": 3, 00:14:46.176 "num_base_bdevs_discovered": 3, 00:14:46.176 "num_base_bdevs_operational": 3, 00:14:46.176 "base_bdevs_list": [ 00:14:46.176 { 00:14:46.176 "name": "pt1", 00:14:46.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:46.176 "is_configured": true, 00:14:46.176 "data_offset": 2048, 00:14:46.176 "data_size": 63488 00:14:46.176 }, 00:14:46.176 { 00:14:46.176 "name": "pt2", 00:14:46.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:46.176 "is_configured": true, 00:14:46.176 "data_offset": 2048, 00:14:46.176 "data_size": 63488 00:14:46.176 }, 00:14:46.176 { 00:14:46.176 "name": "pt3", 00:14:46.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:46.176 "is_configured": true, 00:14:46.176 "data_offset": 2048, 00:14:46.176 "data_size": 63488 00:14:46.176 } 00:14:46.176 ] 00:14:46.176 } 00:14:46.176 } 00:14:46.176 }' 00:14:46.176 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:46.176 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:46.176 pt2 00:14:46.176 pt3' 00:14:46.176 
13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:46.176 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:46.176 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:46.435 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:46.435 "name": "pt1", 00:14:46.435 "aliases": [ 00:14:46.435 "00000000-0000-0000-0000-000000000001" 00:14:46.435 ], 00:14:46.435 "product_name": "passthru", 00:14:46.435 "block_size": 512, 00:14:46.435 "num_blocks": 65536, 00:14:46.435 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:46.435 "assigned_rate_limits": { 00:14:46.435 "rw_ios_per_sec": 0, 00:14:46.435 "rw_mbytes_per_sec": 0, 00:14:46.435 "r_mbytes_per_sec": 0, 00:14:46.435 "w_mbytes_per_sec": 0 00:14:46.435 }, 00:14:46.435 "claimed": true, 00:14:46.435 "claim_type": "exclusive_write", 00:14:46.435 "zoned": false, 00:14:46.435 "supported_io_types": { 00:14:46.435 "read": true, 00:14:46.435 "write": true, 00:14:46.435 "unmap": true, 00:14:46.435 "flush": true, 00:14:46.435 "reset": true, 00:14:46.435 "nvme_admin": false, 00:14:46.435 "nvme_io": false, 00:14:46.435 "nvme_io_md": false, 00:14:46.435 "write_zeroes": true, 00:14:46.435 "zcopy": true, 00:14:46.435 "get_zone_info": false, 00:14:46.435 "zone_management": false, 00:14:46.435 "zone_append": false, 00:14:46.435 "compare": false, 00:14:46.435 "compare_and_write": false, 00:14:46.435 "abort": true, 00:14:46.435 "seek_hole": false, 00:14:46.435 "seek_data": false, 00:14:46.435 "copy": true, 00:14:46.435 "nvme_iov_md": false 00:14:46.435 }, 00:14:46.435 "memory_domains": [ 00:14:46.435 { 00:14:46.435 "dma_device_id": "system", 00:14:46.435 "dma_device_type": 1 00:14:46.435 }, 00:14:46.435 { 00:14:46.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.435 
"dma_device_type": 2 00:14:46.435 } 00:14:46.435 ], 00:14:46.435 "driver_specific": { 00:14:46.435 "passthru": { 00:14:46.435 "name": "pt1", 00:14:46.435 "base_bdev_name": "malloc1" 00:14:46.435 } 00:14:46.435 } 00:14:46.435 }' 00:14:46.435 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:46.435 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:46.435 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:46.435 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:46.694 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:46.694 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:46.694 13:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:46.694 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:46.694 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:46.694 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:46.694 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:46.694 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:46.694 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:46.694 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:46.694 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:46.952 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:46.952 "name": "pt2", 00:14:46.952 "aliases": [ 00:14:46.952 
"00000000-0000-0000-0000-000000000002" 00:14:46.952 ], 00:14:46.952 "product_name": "passthru", 00:14:46.952 "block_size": 512, 00:14:46.952 "num_blocks": 65536, 00:14:46.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:46.952 "assigned_rate_limits": { 00:14:46.952 "rw_ios_per_sec": 0, 00:14:46.952 "rw_mbytes_per_sec": 0, 00:14:46.952 "r_mbytes_per_sec": 0, 00:14:46.952 "w_mbytes_per_sec": 0 00:14:46.952 }, 00:14:46.952 "claimed": true, 00:14:46.952 "claim_type": "exclusive_write", 00:14:46.952 "zoned": false, 00:14:46.952 "supported_io_types": { 00:14:46.952 "read": true, 00:14:46.952 "write": true, 00:14:46.952 "unmap": true, 00:14:46.952 "flush": true, 00:14:46.952 "reset": true, 00:14:46.952 "nvme_admin": false, 00:14:46.952 "nvme_io": false, 00:14:46.952 "nvme_io_md": false, 00:14:46.952 "write_zeroes": true, 00:14:46.952 "zcopy": true, 00:14:46.952 "get_zone_info": false, 00:14:46.952 "zone_management": false, 00:14:46.952 "zone_append": false, 00:14:46.952 "compare": false, 00:14:46.952 "compare_and_write": false, 00:14:46.952 "abort": true, 00:14:46.952 "seek_hole": false, 00:14:46.952 "seek_data": false, 00:14:46.952 "copy": true, 00:14:46.952 "nvme_iov_md": false 00:14:46.952 }, 00:14:46.952 "memory_domains": [ 00:14:46.952 { 00:14:46.952 "dma_device_id": "system", 00:14:46.952 "dma_device_type": 1 00:14:46.952 }, 00:14:46.952 { 00:14:46.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.952 "dma_device_type": 2 00:14:46.952 } 00:14:46.952 ], 00:14:46.952 "driver_specific": { 00:14:46.953 "passthru": { 00:14:46.953 "name": "pt2", 00:14:46.953 "base_bdev_name": "malloc2" 00:14:46.953 } 00:14:46.953 } 00:14:46.953 }' 00:14:46.953 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:46.953 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:47.210 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:47.211 13:14:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:47.211 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:47.211 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:47.211 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:47.211 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:47.211 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:47.211 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:47.211 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:47.469 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:47.469 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:47.469 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:47.469 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:47.469 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:47.469 "name": "pt3", 00:14:47.469 "aliases": [ 00:14:47.469 "00000000-0000-0000-0000-000000000003" 00:14:47.469 ], 00:14:47.469 "product_name": "passthru", 00:14:47.469 "block_size": 512, 00:14:47.469 "num_blocks": 65536, 00:14:47.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:47.469 "assigned_rate_limits": { 00:14:47.469 "rw_ios_per_sec": 0, 00:14:47.469 "rw_mbytes_per_sec": 0, 00:14:47.469 "r_mbytes_per_sec": 0, 00:14:47.469 "w_mbytes_per_sec": 0 00:14:47.469 }, 00:14:47.469 "claimed": true, 00:14:47.469 "claim_type": "exclusive_write", 00:14:47.469 "zoned": false, 00:14:47.469 "supported_io_types": { 
00:14:47.469 "read": true, 00:14:47.469 "write": true, 00:14:47.469 "unmap": true, 00:14:47.469 "flush": true, 00:14:47.469 "reset": true, 00:14:47.469 "nvme_admin": false, 00:14:47.469 "nvme_io": false, 00:14:47.469 "nvme_io_md": false, 00:14:47.469 "write_zeroes": true, 00:14:47.469 "zcopy": true, 00:14:47.469 "get_zone_info": false, 00:14:47.469 "zone_management": false, 00:14:47.469 "zone_append": false, 00:14:47.469 "compare": false, 00:14:47.469 "compare_and_write": false, 00:14:47.469 "abort": true, 00:14:47.469 "seek_hole": false, 00:14:47.469 "seek_data": false, 00:14:47.469 "copy": true, 00:14:47.469 "nvme_iov_md": false 00:14:47.469 }, 00:14:47.469 "memory_domains": [ 00:14:47.469 { 00:14:47.469 "dma_device_id": "system", 00:14:47.469 "dma_device_type": 1 00:14:47.469 }, 00:14:47.469 { 00:14:47.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.469 "dma_device_type": 2 00:14:47.469 } 00:14:47.469 ], 00:14:47.469 "driver_specific": { 00:14:47.469 "passthru": { 00:14:47.469 "name": "pt3", 00:14:47.469 "base_bdev_name": "malloc3" 00:14:47.469 } 00:14:47.469 } 00:14:47.469 }' 00:14:47.469 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:47.730 13:14:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:47.730 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:47.730 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:47.730 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:47.730 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:47.730 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:47.730 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:47.730 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:14:47.730 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:48.012 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:48.012 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:48.012 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:48.012 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:14:48.282 [2024-07-25 13:14:58.499857] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.282 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' b345b396-a1b6-446a-9ef5-f2c5358a3a73 '!=' b345b396-a1b6-446a-9ef5-f2c5358a3a73 ']' 00:14:48.282 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:14:48.282 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:48.282 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:48.282 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 861559 00:14:48.282 13:14:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 861559 ']' 00:14:48.282 13:14:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 861559 00:14:48.282 13:14:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:48.282 13:14:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.282 13:14:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 861559 00:14:48.282 13:14:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:48.282 13:14:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:48.282 13:14:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 861559' 00:14:48.282 killing process with pid 861559 00:14:48.282 13:14:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 861559 00:14:48.282 [2024-07-25 13:14:58.588067] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.282 [2024-07-25 13:14:58.588116] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.282 [2024-07-25 13:14:58.588168] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.282 [2024-07-25 13:14:58.588179] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c0f9f0 name raid_bdev1, state offline 00:14:48.282 13:14:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 861559 00:14:48.282 [2024-07-25 13:14:58.611366] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:48.541 13:14:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:14:48.541 00:14:48.541 real 0m14.492s 00:14:48.541 user 0m26.126s 00:14:48.541 sys 0m2.590s 00:14:48.541 13:14:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:48.541 13:14:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.541 ************************************ 00:14:48.541 END TEST raid_superblock_test 00:14:48.541 ************************************ 00:14:48.541 13:14:58 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:14:48.541 13:14:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:48.541 13:14:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:48.541 13:14:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:48.541 
************************************ 00:14:48.541 START TEST raid_read_error_test 00:14:48.541 ************************************ 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # 
local raid_bdev_name=raid_bdev1 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:14:48.541 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:14:48.542 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.6tefr39mdD 00:14:48.542 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=864236 00:14:48.542 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 864236 /var/tmp/spdk-raid.sock 00:14:48.542 13:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:48.542 13:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 864236 ']' 00:14:48.542 13:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:48.542 13:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:48.542 13:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:14:48.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:48.542 13:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:48.542 13:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.542 [2024-07-25 13:14:58.962567] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:14:48.542 [2024-07-25 13:14:58.962624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864236 ] 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3d:02.0 cannot be used 
00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:48.801 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:48.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:48.801 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:48.801 [2024-07-25 13:14:59.092877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.801 [2024-07-25 13:14:59.178803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.801 [2024-07-25 13:14:59.242961] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.801 [2024-07-25 13:14:59.242995] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.737 13:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:49.737 13:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:49.737 13:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in 
"${base_bdevs[@]}" 00:14:49.737 13:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:49.737 BaseBdev1_malloc 00:14:49.737 13:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:49.996 true 00:14:49.996 13:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:50.254 [2024-07-25 13:15:00.524214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:50.254 [2024-07-25 13:15:00.524253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.254 [2024-07-25 13:15:00.524271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20ab1d0 00:14:50.254 [2024-07-25 13:15:00.524283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.254 [2024-07-25 13:15:00.525854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.254 [2024-07-25 13:15:00.525882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:50.254 BaseBdev1 00:14:50.254 13:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:50.254 13:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:50.513 BaseBdev2_malloc 00:14:50.513 13:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:50.513 true 00:14:50.772 13:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:50.772 [2024-07-25 13:15:01.214239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:50.772 [2024-07-25 13:15:01.214281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.772 [2024-07-25 13:15:01.214299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20ae710 00:14:50.772 [2024-07-25 13:15:01.214311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.772 [2024-07-25 13:15:01.215748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.772 [2024-07-25 13:15:01.215775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:50.772 BaseBdev2 00:14:50.772 13:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:50.772 13:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:51.030 BaseBdev3_malloc 00:14:51.030 13:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:51.289 true 00:14:51.289 13:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:51.547 [2024-07-25 13:15:01.884314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev3_malloc 00:14:51.547 [2024-07-25 13:15:01.884355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.547 [2024-07-25 13:15:01.884374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20b0de0 00:14:51.547 [2024-07-25 13:15:01.884386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.547 [2024-07-25 13:15:01.885765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.547 [2024-07-25 13:15:01.885793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:51.547 BaseBdev3 00:14:51.547 13:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:51.806 [2024-07-25 13:15:02.108929] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.806 [2024-07-25 13:15:02.110107] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.806 [2024-07-25 13:15:02.110184] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:51.806 [2024-07-25 13:15:02.110362] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x20b2780 00:14:51.806 [2024-07-25 13:15:02.110373] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:51.806 [2024-07-25 13:15:02.110555] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20b7140 00:14:51.806 [2024-07-25 13:15:02.110685] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20b2780 00:14:51.806 [2024-07-25 13:15:02.110695] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20b2780 00:14:51.806 [2024-07-25 13:15:02.110804] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.806 13:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:51.806 13:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:51.806 13:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:51.806 13:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:51.806 13:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:51.806 13:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:51.806 13:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:51.806 13:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:51.806 13:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:51.806 13:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:51.806 13:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.806 13:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.064 13:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:52.064 "name": "raid_bdev1", 00:14:52.064 "uuid": "cd14fe70-0ac0-431e-9df7-cc5bece13466", 00:14:52.064 "strip_size_kb": 64, 00:14:52.064 "state": "online", 00:14:52.064 "raid_level": "raid0", 00:14:52.064 "superblock": true, 00:14:52.064 "num_base_bdevs": 3, 00:14:52.064 "num_base_bdevs_discovered": 3, 00:14:52.064 "num_base_bdevs_operational": 3, 00:14:52.064 "base_bdevs_list": [ 00:14:52.064 { 
00:14:52.064 "name": "BaseBdev1", 00:14:52.064 "uuid": "affcc019-bf38-56de-a300-a1866b795356", 00:14:52.064 "is_configured": true, 00:14:52.064 "data_offset": 2048, 00:14:52.064 "data_size": 63488 00:14:52.064 }, 00:14:52.064 { 00:14:52.064 "name": "BaseBdev2", 00:14:52.064 "uuid": "1171f74a-a697-55aa-8df2-b29ed4df6ca7", 00:14:52.064 "is_configured": true, 00:14:52.064 "data_offset": 2048, 00:14:52.064 "data_size": 63488 00:14:52.064 }, 00:14:52.064 { 00:14:52.064 "name": "BaseBdev3", 00:14:52.064 "uuid": "46e00491-1ba9-55d6-ac5e-0f60591e90ae", 00:14:52.064 "is_configured": true, 00:14:52.064 "data_offset": 2048, 00:14:52.064 "data_size": 63488 00:14:52.064 } 00:14:52.064 ] 00:14:52.064 }' 00:14:52.064 13:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:52.064 13:15:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.629 13:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:14:52.629 13:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:52.629 [2024-07-25 13:15:03.007528] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20b3ab0 00:14:53.562 13:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:53.819 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:14:53.819 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:53.819 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:14:53.819 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online 
raid0 64 3 00:14:53.819 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:53.819 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:53.819 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:53.819 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:53.819 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:53.819 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:53.819 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:53.819 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:53.819 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:53.819 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.819 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.077 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:54.077 "name": "raid_bdev1", 00:14:54.077 "uuid": "cd14fe70-0ac0-431e-9df7-cc5bece13466", 00:14:54.077 "strip_size_kb": 64, 00:14:54.077 "state": "online", 00:14:54.077 "raid_level": "raid0", 00:14:54.077 "superblock": true, 00:14:54.077 "num_base_bdevs": 3, 00:14:54.077 "num_base_bdevs_discovered": 3, 00:14:54.077 "num_base_bdevs_operational": 3, 00:14:54.077 "base_bdevs_list": [ 00:14:54.077 { 00:14:54.077 "name": "BaseBdev1", 00:14:54.077 "uuid": "affcc019-bf38-56de-a300-a1866b795356", 00:14:54.077 "is_configured": true, 00:14:54.077 "data_offset": 2048, 00:14:54.077 
"data_size": 63488 00:14:54.077 }, 00:14:54.077 { 00:14:54.077 "name": "BaseBdev2", 00:14:54.077 "uuid": "1171f74a-a697-55aa-8df2-b29ed4df6ca7", 00:14:54.077 "is_configured": true, 00:14:54.077 "data_offset": 2048, 00:14:54.077 "data_size": 63488 00:14:54.077 }, 00:14:54.077 { 00:14:54.077 "name": "BaseBdev3", 00:14:54.077 "uuid": "46e00491-1ba9-55d6-ac5e-0f60591e90ae", 00:14:54.077 "is_configured": true, 00:14:54.077 "data_offset": 2048, 00:14:54.077 "data_size": 63488 00:14:54.077 } 00:14:54.077 ] 00:14:54.077 }' 00:14:54.077 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:54.077 13:15:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.643 13:15:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:54.901 [2024-07-25 13:15:05.187070] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:54.902 [2024-07-25 13:15:05.187101] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.902 [2024-07-25 13:15:05.190053] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.902 [2024-07-25 13:15:05.190087] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.902 [2024-07-25 13:15:05.190116] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.902 [2024-07-25 13:15:05.190126] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20b2780 name raid_bdev1, state offline 00:14:54.902 0 00:14:54.902 13:15:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 864236 00:14:54.902 13:15:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 864236 ']' 00:14:54.902 13:15:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@954 -- # kill -0 864236 00:14:54.902 13:15:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:14:54.902 13:15:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:54.902 13:15:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 864236 00:14:54.902 13:15:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:54.902 13:15:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:54.902 13:15:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 864236' 00:14:54.902 killing process with pid 864236 00:14:54.902 13:15:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 864236 00:14:54.902 [2024-07-25 13:15:05.264422] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:54.902 13:15:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 864236 00:14:54.902 [2024-07-25 13:15:05.282176] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:55.160 13:15:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.6tefr39mdD 00:14:55.160 13:15:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:14:55.160 13:15:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:14:55.160 13:15:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:14:55.160 13:15:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:14:55.160 13:15:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:55.160 13:15:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:55.160 13:15:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 
0.46 != \0\.\0\0 ]] 00:14:55.160 00:14:55.160 real 0m6.600s 00:14:55.160 user 0m10.426s 00:14:55.160 sys 0m1.103s 00:14:55.160 13:15:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:55.160 13:15:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.160 ************************************ 00:14:55.160 END TEST raid_read_error_test 00:14:55.160 ************************************ 00:14:55.160 13:15:05 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:14:55.160 13:15:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:55.160 13:15:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:55.160 13:15:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:55.160 ************************************ 00:14:55.160 START TEST raid_write_error_test 00:14:55.160 ************************************ 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:14:55.160 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:14:55.161 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:14:55.161 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.FHkVSd6HUD 00:14:55.161 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=865952 00:14:55.161 13:15:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@825 -- # waitforlisten 865952 /var/tmp/spdk-raid.sock 00:14:55.161 13:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:55.161 13:15:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 865952 ']' 00:14:55.161 13:15:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:55.161 13:15:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:55.161 13:15:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:55.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:55.161 13:15:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:55.161 13:15:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.161 [2024-07-25 13:15:05.640890] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:14:55.161 [2024-07-25 13:15:05.640952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865952 ] 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3d:02.3 cannot be used 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:55.423 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:55.423 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:55.423 [2024-07-25 13:15:05.772087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.423 [2024-07-25 13:15:05.859902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.680 [2024-07-25 13:15:05.926106] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.680 [2024-07-25 13:15:05.926137] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.244 13:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:56.244 13:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:56.244 13:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:56.244 13:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:56.501 BaseBdev1_malloc 00:14:56.501 13:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:56.759 true 00:14:56.759 13:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:56.759 [2024-07-25 13:15:07.214902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:56.759 [2024-07-25 13:15:07.214942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.759 [2024-07-25 13:15:07.214959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9e81d0 00:14:56.759 [2024-07-25 13:15:07.214970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.759 [2024-07-25 13:15:07.216445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.759 [2024-07-25 13:15:07.216471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:56.759 BaseBdev1 00:14:56.759 13:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:56.759 13:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:57.016 BaseBdev2_malloc 00:14:57.016 13:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:57.274 true 00:14:57.274 13:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:57.533 [2024-07-25 13:15:07.852706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:14:57.533 [2024-07-25 13:15:07.852745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.533 [2024-07-25 13:15:07.852762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9eb710 00:14:57.533 [2024-07-25 13:15:07.852774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.533 [2024-07-25 13:15:07.854033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.533 [2024-07-25 13:15:07.854058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:57.533 BaseBdev2 00:14:57.533 13:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:57.533 13:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:57.791 BaseBdev3_malloc 00:14:57.791 13:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:58.049 true 00:14:58.049 13:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:58.049 [2024-07-25 13:15:08.534487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:58.049 [2024-07-25 13:15:08.534521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.049 [2024-07-25 13:15:08.534540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9edde0 00:14:58.049 [2024-07-25 13:15:08.534551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.049 [2024-07-25 
13:15:08.535808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.049 [2024-07-25 13:15:08.535833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:58.307 BaseBdev3 00:14:58.307 13:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:58.307 [2024-07-25 13:15:08.759117] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.307 [2024-07-25 13:15:08.760191] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.307 [2024-07-25 13:15:08.760250] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:58.307 [2024-07-25 13:15:08.760419] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x9ef780 00:14:58.307 [2024-07-25 13:15:08.760430] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:58.307 [2024-07-25 13:15:08.760591] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9f4140 00:14:58.307 [2024-07-25 13:15:08.760714] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9ef780 00:14:58.307 [2024-07-25 13:15:08.760724] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9ef780 00:14:58.307 [2024-07-25 13:15:08.760822] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.307 13:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:58.307 13:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:58.307 13:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:14:58.307 13:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:58.307 13:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:58.307 13:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:58.307 13:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:58.307 13:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:58.307 13:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:58.307 13:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:58.307 13:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.307 13:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.565 13:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:58.565 "name": "raid_bdev1", 00:14:58.565 "uuid": "9c92d400-72a5-44be-9fbd-e64dface6711", 00:14:58.565 "strip_size_kb": 64, 00:14:58.565 "state": "online", 00:14:58.565 "raid_level": "raid0", 00:14:58.565 "superblock": true, 00:14:58.565 "num_base_bdevs": 3, 00:14:58.565 "num_base_bdevs_discovered": 3, 00:14:58.565 "num_base_bdevs_operational": 3, 00:14:58.565 "base_bdevs_list": [ 00:14:58.565 { 00:14:58.565 "name": "BaseBdev1", 00:14:58.565 "uuid": "9e634229-a5c0-502c-ae0b-62ee2a793024", 00:14:58.565 "is_configured": true, 00:14:58.565 "data_offset": 2048, 00:14:58.565 "data_size": 63488 00:14:58.565 }, 00:14:58.565 { 00:14:58.565 "name": "BaseBdev2", 00:14:58.565 "uuid": "45ed6d99-3e77-51db-8dd9-baa315a0d915", 00:14:58.565 "is_configured": true, 00:14:58.565 "data_offset": 2048, 00:14:58.565 
"data_size": 63488 00:14:58.565 }, 00:14:58.565 { 00:14:58.565 "name": "BaseBdev3", 00:14:58.565 "uuid": "aa2ac741-bb72-5b19-bf88-e9d0a6204508", 00:14:58.565 "is_configured": true, 00:14:58.565 "data_offset": 2048, 00:14:58.565 "data_size": 63488 00:14:58.565 } 00:14:58.565 ] 00:14:58.565 }' 00:14:58.565 13:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:58.565 13:15:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.131 13:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:59.131 13:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:14:59.390 [2024-07-25 13:15:09.673773] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9f0ab0 00:15:00.324 13:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:00.583 13:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:00.583 13:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:00.583 13:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:15:00.583 13:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:00.583 13:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:00.583 13:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:00.583 13:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:00.583 13:15:10 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:00.583 13:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:00.583 13:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:00.583 13:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:00.583 13:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:00.583 13:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:00.583 13:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.583 13:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.583 13:15:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:00.583 "name": "raid_bdev1", 00:15:00.583 "uuid": "9c92d400-72a5-44be-9fbd-e64dface6711", 00:15:00.583 "strip_size_kb": 64, 00:15:00.583 "state": "online", 00:15:00.583 "raid_level": "raid0", 00:15:00.583 "superblock": true, 00:15:00.583 "num_base_bdevs": 3, 00:15:00.583 "num_base_bdevs_discovered": 3, 00:15:00.583 "num_base_bdevs_operational": 3, 00:15:00.583 "base_bdevs_list": [ 00:15:00.583 { 00:15:00.583 "name": "BaseBdev1", 00:15:00.583 "uuid": "9e634229-a5c0-502c-ae0b-62ee2a793024", 00:15:00.583 "is_configured": true, 00:15:00.583 "data_offset": 2048, 00:15:00.583 "data_size": 63488 00:15:00.583 }, 00:15:00.583 { 00:15:00.583 "name": "BaseBdev2", 00:15:00.583 "uuid": "45ed6d99-3e77-51db-8dd9-baa315a0d915", 00:15:00.583 "is_configured": true, 00:15:00.583 "data_offset": 2048, 00:15:00.583 "data_size": 63488 00:15:00.583 }, 00:15:00.583 { 00:15:00.583 "name": "BaseBdev3", 00:15:00.583 "uuid": "aa2ac741-bb72-5b19-bf88-e9d0a6204508", 00:15:00.583 
"is_configured": true, 00:15:00.583 "data_offset": 2048, 00:15:00.583 "data_size": 63488 00:15:00.583 } 00:15:00.583 ] 00:15:00.583 }' 00:15:00.583 13:15:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:00.583 13:15:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.149 13:15:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:01.407 [2024-07-25 13:15:11.828994] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:01.407 [2024-07-25 13:15:11.829029] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.407 [2024-07-25 13:15:11.831996] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.407 [2024-07-25 13:15:11.832030] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.407 [2024-07-25 13:15:11.832060] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.407 [2024-07-25 13:15:11.832070] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9ef780 name raid_bdev1, state offline 00:15:01.407 0 00:15:01.407 13:15:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 865952 00:15:01.407 13:15:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 865952 ']' 00:15:01.407 13:15:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 865952 00:15:01.407 13:15:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:15:01.407 13:15:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:01.407 13:15:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 865952 00:15:01.666 
13:15:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:01.666 13:15:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:01.666 13:15:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 865952' 00:15:01.666 killing process with pid 865952 00:15:01.666 13:15:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 865952 00:15:01.666 [2024-07-25 13:15:11.907210] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:01.666 13:15:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 865952 00:15:01.666 [2024-07-25 13:15:11.926385] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.666 13:15:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:15:01.666 13:15:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.FHkVSd6HUD 00:15:01.666 13:15:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:15:01.666 13:15:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:15:01.666 13:15:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:15:01.666 13:15:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:01.666 13:15:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:01.666 13:15:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:15:01.666 00:15:01.666 real 0m6.566s 00:15:01.666 user 0m10.309s 00:15:01.666 sys 0m1.144s 00:15:01.666 13:15:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.666 13:15:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.666 ************************************ 00:15:01.666 
END TEST raid_write_error_test 00:15:01.666 ************************************ 00:15:01.925 13:15:12 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:15:01.925 13:15:12 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:15:01.925 13:15:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:01.925 13:15:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:01.925 13:15:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:01.925 ************************************ 00:15:01.925 START TEST raid_state_function_test 00:15:01.925 ************************************ 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ 
)) 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=867347 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 867347' 00:15:01.925 Process raid pid: 867347 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@246 -- # waitforlisten 867347 /var/tmp/spdk-raid.sock 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 867347 ']' 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:01.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.925 13:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:01.926 [2024-07-25 13:15:12.281796] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:15:01.926 [2024-07-25 13:15:12.281853] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:01.926 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:01.926 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:01.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:01.926 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:01.926 [2024-07-25 13:15:12.412256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.202 [2024-07-25 13:15:12.498269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.202 [2024-07-25 13:15:12.558190] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.202 [2024-07-25 13:15:12.558225] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.799 13:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:02.799 13:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:02.799 13:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:03.058 [2024-07-25 13:15:13.312217] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:03.059 [2024-07-25 13:15:13.312257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't 
exist now 00:15:03.059 [2024-07-25 13:15:13.312267] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:03.059 [2024-07-25 13:15:13.312277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:03.059 [2024-07-25 13:15:13.312285] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:03.059 [2024-07-25 13:15:13.312295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:03.059 13:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:03.059 13:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:03.059 13:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:03.059 13:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:03.059 13:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:03.059 13:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:03.059 13:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.059 13:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.059 13:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:03.059 13:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:03.059 13:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.059 13:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:15:03.629 13:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.629 "name": "Existed_Raid", 00:15:03.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.629 "strip_size_kb": 64, 00:15:03.629 "state": "configuring", 00:15:03.629 "raid_level": "concat", 00:15:03.629 "superblock": false, 00:15:03.629 "num_base_bdevs": 3, 00:15:03.629 "num_base_bdevs_discovered": 0, 00:15:03.629 "num_base_bdevs_operational": 3, 00:15:03.629 "base_bdevs_list": [ 00:15:03.629 { 00:15:03.629 "name": "BaseBdev1", 00:15:03.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.629 "is_configured": false, 00:15:03.629 "data_offset": 0, 00:15:03.629 "data_size": 0 00:15:03.629 }, 00:15:03.629 { 00:15:03.629 "name": "BaseBdev2", 00:15:03.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.629 "is_configured": false, 00:15:03.629 "data_offset": 0, 00:15:03.629 "data_size": 0 00:15:03.629 }, 00:15:03.629 { 00:15:03.629 "name": "BaseBdev3", 00:15:03.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.629 "is_configured": false, 00:15:03.629 "data_offset": 0, 00:15:03.629 "data_size": 0 00:15:03.629 } 00:15:03.629 ] 00:15:03.629 }' 00:15:03.629 13:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.629 13:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.888 13:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:04.147 [2024-07-25 13:15:14.567371] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:04.147 [2024-07-25 13:15:14.567400] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x128df40 name Existed_Raid, state configuring 00:15:04.147 13:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:04.716 [2024-07-25 13:15:15.064672] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:04.716 [2024-07-25 13:15:15.064701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:04.716 [2024-07-25 13:15:15.064710] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.716 [2024-07-25 13:15:15.064720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.716 [2024-07-25 13:15:15.064728] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:04.716 [2024-07-25 13:15:15.064738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:04.716 13:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:04.975 [2024-07-25 13:15:15.302697] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.975 BaseBdev1 00:15:04.975 13:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:04.975 13:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:04.975 13:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:04.975 13:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:04.975 13:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:04.975 13:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:15:04.975 13:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:05.235 13:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:05.235 [ 00:15:05.235 { 00:15:05.235 "name": "BaseBdev1", 00:15:05.235 "aliases": [ 00:15:05.235 "3350f1b2-e1c6-4f8a-94d0-a9e13fcd9210" 00:15:05.235 ], 00:15:05.235 "product_name": "Malloc disk", 00:15:05.235 "block_size": 512, 00:15:05.235 "num_blocks": 65536, 00:15:05.235 "uuid": "3350f1b2-e1c6-4f8a-94d0-a9e13fcd9210", 00:15:05.235 "assigned_rate_limits": { 00:15:05.235 "rw_ios_per_sec": 0, 00:15:05.235 "rw_mbytes_per_sec": 0, 00:15:05.235 "r_mbytes_per_sec": 0, 00:15:05.235 "w_mbytes_per_sec": 0 00:15:05.235 }, 00:15:05.235 "claimed": true, 00:15:05.235 "claim_type": "exclusive_write", 00:15:05.235 "zoned": false, 00:15:05.235 "supported_io_types": { 00:15:05.235 "read": true, 00:15:05.235 "write": true, 00:15:05.235 "unmap": true, 00:15:05.235 "flush": true, 00:15:05.235 "reset": true, 00:15:05.235 "nvme_admin": false, 00:15:05.235 "nvme_io": false, 00:15:05.235 "nvme_io_md": false, 00:15:05.235 "write_zeroes": true, 00:15:05.235 "zcopy": true, 00:15:05.235 "get_zone_info": false, 00:15:05.235 "zone_management": false, 00:15:05.235 "zone_append": false, 00:15:05.235 "compare": false, 00:15:05.235 "compare_and_write": false, 00:15:05.235 "abort": true, 00:15:05.235 "seek_hole": false, 00:15:05.235 "seek_data": false, 00:15:05.235 "copy": true, 00:15:05.235 "nvme_iov_md": false 00:15:05.235 }, 00:15:05.235 "memory_domains": [ 00:15:05.235 { 00:15:05.235 "dma_device_id": "system", 00:15:05.235 "dma_device_type": 1 00:15:05.235 }, 00:15:05.235 { 00:15:05.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.235 "dma_device_type": 2 
00:15:05.235 } 00:15:05.235 ], 00:15:05.235 "driver_specific": {} 00:15:05.235 } 00:15:05.235 ] 00:15:05.235 13:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:05.235 13:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:05.235 13:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:05.235 13:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:05.235 13:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:05.235 13:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:05.235 13:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:05.235 13:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:05.235 13:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:05.235 13:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:05.235 13:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:05.235 13:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.235 13:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.494 13:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:05.494 "name": "Existed_Raid", 00:15:05.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.494 "strip_size_kb": 64, 00:15:05.494 "state": "configuring", 00:15:05.494 "raid_level": 
"concat", 00:15:05.494 "superblock": false, 00:15:05.494 "num_base_bdevs": 3, 00:15:05.494 "num_base_bdevs_discovered": 1, 00:15:05.494 "num_base_bdevs_operational": 3, 00:15:05.494 "base_bdevs_list": [ 00:15:05.494 { 00:15:05.494 "name": "BaseBdev1", 00:15:05.494 "uuid": "3350f1b2-e1c6-4f8a-94d0-a9e13fcd9210", 00:15:05.494 "is_configured": true, 00:15:05.494 "data_offset": 0, 00:15:05.494 "data_size": 65536 00:15:05.494 }, 00:15:05.494 { 00:15:05.494 "name": "BaseBdev2", 00:15:05.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.494 "is_configured": false, 00:15:05.494 "data_offset": 0, 00:15:05.494 "data_size": 0 00:15:05.494 }, 00:15:05.494 { 00:15:05.494 "name": "BaseBdev3", 00:15:05.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.494 "is_configured": false, 00:15:05.494 "data_offset": 0, 00:15:05.494 "data_size": 0 00:15:05.494 } 00:15:05.494 ] 00:15:05.494 }' 00:15:05.494 13:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:05.494 13:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.063 13:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:06.322 [2024-07-25 13:15:16.618188] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:06.322 [2024-07-25 13:15:16.618223] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x128d810 name Existed_Raid, state configuring 00:15:06.322 13:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:06.582 [2024-07-25 13:15:16.846820] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.582 [2024-07-25 
13:15:16.848214] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:06.582 [2024-07-25 13:15:16.848245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:06.582 [2024-07-25 13:15:16.848254] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:06.582 [2024-07-25 13:15:16.848265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:06.582 13:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:06.582 13:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:06.582 13:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:06.582 13:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:06.582 13:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:06.582 13:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:06.582 13:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:06.582 13:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:06.582 13:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:06.582 13:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:06.582 13:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:06.582 13:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:06.582 13:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.582 13:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.582 13:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:06.582 "name": "Existed_Raid", 00:15:06.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.582 "strip_size_kb": 64, 00:15:06.582 "state": "configuring", 00:15:06.582 "raid_level": "concat", 00:15:06.582 "superblock": false, 00:15:06.582 "num_base_bdevs": 3, 00:15:06.582 "num_base_bdevs_discovered": 1, 00:15:06.582 "num_base_bdevs_operational": 3, 00:15:06.582 "base_bdevs_list": [ 00:15:06.582 { 00:15:06.582 "name": "BaseBdev1", 00:15:06.582 "uuid": "3350f1b2-e1c6-4f8a-94d0-a9e13fcd9210", 00:15:06.582 "is_configured": true, 00:15:06.582 "data_offset": 0, 00:15:06.582 "data_size": 65536 00:15:06.582 }, 00:15:06.582 { 00:15:06.582 "name": "BaseBdev2", 00:15:06.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.582 "is_configured": false, 00:15:06.582 "data_offset": 0, 00:15:06.582 "data_size": 0 00:15:06.582 }, 00:15:06.582 { 00:15:06.582 "name": "BaseBdev3", 00:15:06.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.582 "is_configured": false, 00:15:06.582 "data_offset": 0, 00:15:06.582 "data_size": 0 00:15:06.582 } 00:15:06.582 ] 00:15:06.582 }' 00:15:06.582 13:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:06.582 13:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.151 13:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:07.410 [2024-07-25 13:15:17.820510] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:15:07.410 BaseBdev2 00:15:07.410 13:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:07.410 13:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:07.410 13:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:07.410 13:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:07.410 13:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:07.410 13:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:07.410 13:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:07.671 13:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:07.930 [ 00:15:07.930 { 00:15:07.930 "name": "BaseBdev2", 00:15:07.930 "aliases": [ 00:15:07.930 "6b839a94-21bc-4f7e-90dd-2c80dca4b593" 00:15:07.930 ], 00:15:07.930 "product_name": "Malloc disk", 00:15:07.930 "block_size": 512, 00:15:07.930 "num_blocks": 65536, 00:15:07.930 "uuid": "6b839a94-21bc-4f7e-90dd-2c80dca4b593", 00:15:07.930 "assigned_rate_limits": { 00:15:07.930 "rw_ios_per_sec": 0, 00:15:07.930 "rw_mbytes_per_sec": 0, 00:15:07.930 "r_mbytes_per_sec": 0, 00:15:07.930 "w_mbytes_per_sec": 0 00:15:07.930 }, 00:15:07.930 "claimed": true, 00:15:07.930 "claim_type": "exclusive_write", 00:15:07.930 "zoned": false, 00:15:07.930 "supported_io_types": { 00:15:07.930 "read": true, 00:15:07.930 "write": true, 00:15:07.930 "unmap": true, 00:15:07.930 "flush": true, 00:15:07.930 "reset": true, 00:15:07.930 "nvme_admin": false, 00:15:07.931 "nvme_io": false, 
00:15:07.931 "nvme_io_md": false, 00:15:07.931 "write_zeroes": true, 00:15:07.931 "zcopy": true, 00:15:07.931 "get_zone_info": false, 00:15:07.931 "zone_management": false, 00:15:07.931 "zone_append": false, 00:15:07.931 "compare": false, 00:15:07.931 "compare_and_write": false, 00:15:07.931 "abort": true, 00:15:07.931 "seek_hole": false, 00:15:07.931 "seek_data": false, 00:15:07.931 "copy": true, 00:15:07.931 "nvme_iov_md": false 00:15:07.931 }, 00:15:07.931 "memory_domains": [ 00:15:07.931 { 00:15:07.931 "dma_device_id": "system", 00:15:07.931 "dma_device_type": 1 00:15:07.931 }, 00:15:07.931 { 00:15:07.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.931 "dma_device_type": 2 00:15:07.931 } 00:15:07.931 ], 00:15:07.931 "driver_specific": {} 00:15:07.931 } 00:15:07.931 ] 00:15:07.931 13:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:07.931 13:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:07.931 13:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:07.931 13:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:07.931 13:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:07.931 13:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:07.931 13:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:07.931 13:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.931 13:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:07.931 13:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.931 13:15:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.931 13:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.931 13:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.931 13:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.931 13:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.190 13:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:08.190 "name": "Existed_Raid", 00:15:08.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.190 "strip_size_kb": 64, 00:15:08.190 "state": "configuring", 00:15:08.190 "raid_level": "concat", 00:15:08.190 "superblock": false, 00:15:08.190 "num_base_bdevs": 3, 00:15:08.190 "num_base_bdevs_discovered": 2, 00:15:08.190 "num_base_bdevs_operational": 3, 00:15:08.190 "base_bdevs_list": [ 00:15:08.190 { 00:15:08.190 "name": "BaseBdev1", 00:15:08.190 "uuid": "3350f1b2-e1c6-4f8a-94d0-a9e13fcd9210", 00:15:08.190 "is_configured": true, 00:15:08.190 "data_offset": 0, 00:15:08.190 "data_size": 65536 00:15:08.190 }, 00:15:08.190 { 00:15:08.190 "name": "BaseBdev2", 00:15:08.190 "uuid": "6b839a94-21bc-4f7e-90dd-2c80dca4b593", 00:15:08.190 "is_configured": true, 00:15:08.190 "data_offset": 0, 00:15:08.190 "data_size": 65536 00:15:08.190 }, 00:15:08.190 { 00:15:08.190 "name": "BaseBdev3", 00:15:08.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.190 "is_configured": false, 00:15:08.190 "data_offset": 0, 00:15:08.190 "data_size": 0 00:15:08.190 } 00:15:08.190 ] 00:15:08.190 }' 00:15:08.190 13:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:08.190 13:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:08.759 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:09.022 [2024-07-25 13:15:19.291637] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:09.022 [2024-07-25 13:15:19.291670] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x128e710 00:15:09.022 [2024-07-25 13:15:19.291682] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:09.022 [2024-07-25 13:15:19.291858] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12862e0 00:15:09.022 [2024-07-25 13:15:19.291966] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x128e710 00:15:09.022 [2024-07-25 13:15:19.291975] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x128e710 00:15:09.022 [2024-07-25 13:15:19.292122] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.022 BaseBdev3 00:15:09.022 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:09.022 13:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:09.022 13:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:09.022 13:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:09.022 13:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:09.022 13:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:09.022 13:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:15:09.283 13:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:09.283 [ 00:15:09.283 { 00:15:09.283 "name": "BaseBdev3", 00:15:09.283 "aliases": [ 00:15:09.283 "3f2b1cdf-2d31-409d-91c2-4183568773f7" 00:15:09.283 ], 00:15:09.283 "product_name": "Malloc disk", 00:15:09.283 "block_size": 512, 00:15:09.283 "num_blocks": 65536, 00:15:09.283 "uuid": "3f2b1cdf-2d31-409d-91c2-4183568773f7", 00:15:09.283 "assigned_rate_limits": { 00:15:09.283 "rw_ios_per_sec": 0, 00:15:09.283 "rw_mbytes_per_sec": 0, 00:15:09.283 "r_mbytes_per_sec": 0, 00:15:09.283 "w_mbytes_per_sec": 0 00:15:09.283 }, 00:15:09.283 "claimed": true, 00:15:09.283 "claim_type": "exclusive_write", 00:15:09.283 "zoned": false, 00:15:09.283 "supported_io_types": { 00:15:09.283 "read": true, 00:15:09.283 "write": true, 00:15:09.283 "unmap": true, 00:15:09.283 "flush": true, 00:15:09.283 "reset": true, 00:15:09.283 "nvme_admin": false, 00:15:09.283 "nvme_io": false, 00:15:09.283 "nvme_io_md": false, 00:15:09.283 "write_zeroes": true, 00:15:09.283 "zcopy": true, 00:15:09.283 "get_zone_info": false, 00:15:09.283 "zone_management": false, 00:15:09.283 "zone_append": false, 00:15:09.283 "compare": false, 00:15:09.283 "compare_and_write": false, 00:15:09.283 "abort": true, 00:15:09.283 "seek_hole": false, 00:15:09.283 "seek_data": false, 00:15:09.283 "copy": true, 00:15:09.283 "nvme_iov_md": false 00:15:09.283 }, 00:15:09.283 "memory_domains": [ 00:15:09.283 { 00:15:09.283 "dma_device_id": "system", 00:15:09.283 "dma_device_type": 1 00:15:09.283 }, 00:15:09.283 { 00:15:09.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.283 "dma_device_type": 2 00:15:09.283 } 00:15:09.283 ], 00:15:09.283 "driver_specific": {} 00:15:09.283 } 00:15:09.283 ] 00:15:09.283 13:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # 
return 0 00:15:09.283 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:09.283 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:09.283 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:09.283 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:09.283 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:09.283 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:09.283 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:09.283 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:09.283 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.283 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.283 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.283 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.283 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.283 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.544 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:09.544 "name": "Existed_Raid", 00:15:09.544 "uuid": "bd5f0a1e-8d48-4743-a84f-221c50f6480d", 00:15:09.544 "strip_size_kb": 64, 00:15:09.544 "state": "online", 00:15:09.544 "raid_level": 
"concat", 00:15:09.544 "superblock": false, 00:15:09.544 "num_base_bdevs": 3, 00:15:09.544 "num_base_bdevs_discovered": 3, 00:15:09.544 "num_base_bdevs_operational": 3, 00:15:09.544 "base_bdevs_list": [ 00:15:09.544 { 00:15:09.544 "name": "BaseBdev1", 00:15:09.544 "uuid": "3350f1b2-e1c6-4f8a-94d0-a9e13fcd9210", 00:15:09.544 "is_configured": true, 00:15:09.544 "data_offset": 0, 00:15:09.544 "data_size": 65536 00:15:09.544 }, 00:15:09.544 { 00:15:09.544 "name": "BaseBdev2", 00:15:09.544 "uuid": "6b839a94-21bc-4f7e-90dd-2c80dca4b593", 00:15:09.544 "is_configured": true, 00:15:09.544 "data_offset": 0, 00:15:09.544 "data_size": 65536 00:15:09.544 }, 00:15:09.544 { 00:15:09.544 "name": "BaseBdev3", 00:15:09.544 "uuid": "3f2b1cdf-2d31-409d-91c2-4183568773f7", 00:15:09.544 "is_configured": true, 00:15:09.544 "data_offset": 0, 00:15:09.544 "data_size": 65536 00:15:09.544 } 00:15:09.544 ] 00:15:09.544 }' 00:15:09.544 13:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:09.544 13:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.112 13:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:10.112 13:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:10.112 13:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:10.112 13:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:10.112 13:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:10.112 13:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:10.112 13:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 
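The trace above shows `verify_raid_bdev_state` fetching the raid bdev's JSON via `rpc.py bdev_raid_get_bdevs all`, filtering it with `jq`, and checking fields such as `state` against an expected value. A minimal self-contained sketch of that check is below; the inline JSON is a stub standing in for real RPC output, and the `sed` extraction replaces the `jq` call so the snippet runs without SPDK or `jq` installed.

```shell
#!/bin/sh
# Sketch of the state check performed by verify_raid_bdev_state.
# The JSON here is a stubbed excerpt, not live `rpc.py bdev_raid_get_bdevs` output.
raid_bdev_info='{ "name": "Existed_Raid", "state": "online", "raid_level": "concat", "num_base_bdevs_discovered": 3 }'

# Pull the "state" value out of the JSON (the real test uses jq for this).
state=$(printf '%s' "$raid_bdev_info" | sed -n 's/.*"state": "\([a-z]*\)".*/\1/p')

# Compare against the expected state, as the test does for "online".
if [ "$state" = "online" ]; then
    echo "state check passed: $state"
else
    echo "state check failed: got '$state'" >&2
    exit 1
fi
```

In the real script the expected state, raid level, strip size, and base-bdev counts are all passed in as arguments and checked the same way against the `jq`-filtered output.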
00:15:10.112 13:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:10.371 [2024-07-25 13:15:20.755758] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.371 13:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:10.371 "name": "Existed_Raid", 00:15:10.371 "aliases": [ 00:15:10.371 "bd5f0a1e-8d48-4743-a84f-221c50f6480d" 00:15:10.371 ], 00:15:10.371 "product_name": "Raid Volume", 00:15:10.371 "block_size": 512, 00:15:10.371 "num_blocks": 196608, 00:15:10.371 "uuid": "bd5f0a1e-8d48-4743-a84f-221c50f6480d", 00:15:10.371 "assigned_rate_limits": { 00:15:10.371 "rw_ios_per_sec": 0, 00:15:10.371 "rw_mbytes_per_sec": 0, 00:15:10.371 "r_mbytes_per_sec": 0, 00:15:10.371 "w_mbytes_per_sec": 0 00:15:10.371 }, 00:15:10.371 "claimed": false, 00:15:10.371 "zoned": false, 00:15:10.371 "supported_io_types": { 00:15:10.371 "read": true, 00:15:10.371 "write": true, 00:15:10.371 "unmap": true, 00:15:10.371 "flush": true, 00:15:10.371 "reset": true, 00:15:10.371 "nvme_admin": false, 00:15:10.371 "nvme_io": false, 00:15:10.371 "nvme_io_md": false, 00:15:10.371 "write_zeroes": true, 00:15:10.371 "zcopy": false, 00:15:10.371 "get_zone_info": false, 00:15:10.371 "zone_management": false, 00:15:10.371 "zone_append": false, 00:15:10.371 "compare": false, 00:15:10.371 "compare_and_write": false, 00:15:10.371 "abort": false, 00:15:10.371 "seek_hole": false, 00:15:10.371 "seek_data": false, 00:15:10.371 "copy": false, 00:15:10.371 "nvme_iov_md": false 00:15:10.371 }, 00:15:10.371 "memory_domains": [ 00:15:10.371 { 00:15:10.371 "dma_device_id": "system", 00:15:10.371 "dma_device_type": 1 00:15:10.371 }, 00:15:10.371 { 00:15:10.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.371 "dma_device_type": 2 00:15:10.371 }, 00:15:10.371 { 00:15:10.371 "dma_device_id": "system", 00:15:10.371 "dma_device_type": 1 00:15:10.371 }, 00:15:10.371 { 00:15:10.371 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:10.371 "dma_device_type": 2 00:15:10.371 }, 00:15:10.371 { 00:15:10.372 "dma_device_id": "system", 00:15:10.372 "dma_device_type": 1 00:15:10.372 }, 00:15:10.372 { 00:15:10.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.372 "dma_device_type": 2 00:15:10.372 } 00:15:10.372 ], 00:15:10.372 "driver_specific": { 00:15:10.372 "raid": { 00:15:10.372 "uuid": "bd5f0a1e-8d48-4743-a84f-221c50f6480d", 00:15:10.372 "strip_size_kb": 64, 00:15:10.372 "state": "online", 00:15:10.372 "raid_level": "concat", 00:15:10.372 "superblock": false, 00:15:10.372 "num_base_bdevs": 3, 00:15:10.372 "num_base_bdevs_discovered": 3, 00:15:10.372 "num_base_bdevs_operational": 3, 00:15:10.372 "base_bdevs_list": [ 00:15:10.372 { 00:15:10.372 "name": "BaseBdev1", 00:15:10.372 "uuid": "3350f1b2-e1c6-4f8a-94d0-a9e13fcd9210", 00:15:10.372 "is_configured": true, 00:15:10.372 "data_offset": 0, 00:15:10.372 "data_size": 65536 00:15:10.372 }, 00:15:10.372 { 00:15:10.372 "name": "BaseBdev2", 00:15:10.372 "uuid": "6b839a94-21bc-4f7e-90dd-2c80dca4b593", 00:15:10.372 "is_configured": true, 00:15:10.372 "data_offset": 0, 00:15:10.372 "data_size": 65536 00:15:10.372 }, 00:15:10.372 { 00:15:10.372 "name": "BaseBdev3", 00:15:10.372 "uuid": "3f2b1cdf-2d31-409d-91c2-4183568773f7", 00:15:10.372 "is_configured": true, 00:15:10.372 "data_offset": 0, 00:15:10.372 "data_size": 65536 00:15:10.372 } 00:15:10.372 ] 00:15:10.372 } 00:15:10.372 } 00:15:10.372 }' 00:15:10.372 13:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:10.372 13:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:10.372 BaseBdev2 00:15:10.372 BaseBdev3' 00:15:10.372 13:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:10.372 13:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:10.372 13:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:10.632 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:10.632 "name": "BaseBdev1", 00:15:10.632 "aliases": [ 00:15:10.632 "3350f1b2-e1c6-4f8a-94d0-a9e13fcd9210" 00:15:10.632 ], 00:15:10.632 "product_name": "Malloc disk", 00:15:10.632 "block_size": 512, 00:15:10.632 "num_blocks": 65536, 00:15:10.632 "uuid": "3350f1b2-e1c6-4f8a-94d0-a9e13fcd9210", 00:15:10.632 "assigned_rate_limits": { 00:15:10.632 "rw_ios_per_sec": 0, 00:15:10.632 "rw_mbytes_per_sec": 0, 00:15:10.632 "r_mbytes_per_sec": 0, 00:15:10.632 "w_mbytes_per_sec": 0 00:15:10.632 }, 00:15:10.632 "claimed": true, 00:15:10.632 "claim_type": "exclusive_write", 00:15:10.632 "zoned": false, 00:15:10.632 "supported_io_types": { 00:15:10.632 "read": true, 00:15:10.632 "write": true, 00:15:10.632 "unmap": true, 00:15:10.632 "flush": true, 00:15:10.632 "reset": true, 00:15:10.632 "nvme_admin": false, 00:15:10.632 "nvme_io": false, 00:15:10.632 "nvme_io_md": false, 00:15:10.632 "write_zeroes": true, 00:15:10.632 "zcopy": true, 00:15:10.632 "get_zone_info": false, 00:15:10.632 "zone_management": false, 00:15:10.632 "zone_append": false, 00:15:10.632 "compare": false, 00:15:10.632 "compare_and_write": false, 00:15:10.632 "abort": true, 00:15:10.632 "seek_hole": false, 00:15:10.632 "seek_data": false, 00:15:10.632 "copy": true, 00:15:10.632 "nvme_iov_md": false 00:15:10.632 }, 00:15:10.632 "memory_domains": [ 00:15:10.632 { 00:15:10.632 "dma_device_id": "system", 00:15:10.632 "dma_device_type": 1 00:15:10.632 }, 00:15:10.632 { 00:15:10.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.632 "dma_device_type": 2 00:15:10.632 } 00:15:10.632 ], 00:15:10.632 "driver_specific": {} 00:15:10.632 }' 00:15:10.632 13:15:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:10.632 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:10.632 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:10.632 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:10.893 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:10.893 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:10.893 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:10.893 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:10.893 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:10.893 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:10.893 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:10.893 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:10.893 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:10.893 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:10.893 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:11.153 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:11.153 "name": "BaseBdev2", 00:15:11.153 "aliases": [ 00:15:11.153 "6b839a94-21bc-4f7e-90dd-2c80dca4b593" 00:15:11.153 ], 00:15:11.153 "product_name": "Malloc disk", 00:15:11.153 "block_size": 512, 00:15:11.153 "num_blocks": 65536, 00:15:11.153 "uuid": "6b839a94-21bc-4f7e-90dd-2c80dca4b593", 
00:15:11.153 "assigned_rate_limits": { 00:15:11.153 "rw_ios_per_sec": 0, 00:15:11.153 "rw_mbytes_per_sec": 0, 00:15:11.153 "r_mbytes_per_sec": 0, 00:15:11.153 "w_mbytes_per_sec": 0 00:15:11.153 }, 00:15:11.153 "claimed": true, 00:15:11.153 "claim_type": "exclusive_write", 00:15:11.153 "zoned": false, 00:15:11.153 "supported_io_types": { 00:15:11.153 "read": true, 00:15:11.153 "write": true, 00:15:11.153 "unmap": true, 00:15:11.153 "flush": true, 00:15:11.153 "reset": true, 00:15:11.153 "nvme_admin": false, 00:15:11.153 "nvme_io": false, 00:15:11.153 "nvme_io_md": false, 00:15:11.153 "write_zeroes": true, 00:15:11.153 "zcopy": true, 00:15:11.153 "get_zone_info": false, 00:15:11.153 "zone_management": false, 00:15:11.153 "zone_append": false, 00:15:11.153 "compare": false, 00:15:11.153 "compare_and_write": false, 00:15:11.153 "abort": true, 00:15:11.153 "seek_hole": false, 00:15:11.153 "seek_data": false, 00:15:11.153 "copy": true, 00:15:11.153 "nvme_iov_md": false 00:15:11.153 }, 00:15:11.153 "memory_domains": [ 00:15:11.153 { 00:15:11.153 "dma_device_id": "system", 00:15:11.153 "dma_device_type": 1 00:15:11.153 }, 00:15:11.153 { 00:15:11.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.153 "dma_device_type": 2 00:15:11.153 } 00:15:11.153 ], 00:15:11.153 "driver_specific": {} 00:15:11.153 }' 00:15:11.154 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.154 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.413 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:11.413 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.413 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.413 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:11.413 13:15:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.413 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.413 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:11.413 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.413 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.673 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:11.673 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:11.673 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:11.673 13:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:11.673 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:11.673 "name": "BaseBdev3", 00:15:11.673 "aliases": [ 00:15:11.673 "3f2b1cdf-2d31-409d-91c2-4183568773f7" 00:15:11.673 ], 00:15:11.673 "product_name": "Malloc disk", 00:15:11.673 "block_size": 512, 00:15:11.673 "num_blocks": 65536, 00:15:11.673 "uuid": "3f2b1cdf-2d31-409d-91c2-4183568773f7", 00:15:11.673 "assigned_rate_limits": { 00:15:11.673 "rw_ios_per_sec": 0, 00:15:11.673 "rw_mbytes_per_sec": 0, 00:15:11.673 "r_mbytes_per_sec": 0, 00:15:11.673 "w_mbytes_per_sec": 0 00:15:11.673 }, 00:15:11.673 "claimed": true, 00:15:11.673 "claim_type": "exclusive_write", 00:15:11.673 "zoned": false, 00:15:11.673 "supported_io_types": { 00:15:11.673 "read": true, 00:15:11.673 "write": true, 00:15:11.673 "unmap": true, 00:15:11.673 "flush": true, 00:15:11.673 "reset": true, 00:15:11.673 "nvme_admin": false, 00:15:11.673 "nvme_io": false, 00:15:11.673 "nvme_io_md": false, 00:15:11.673 "write_zeroes": true, 
00:15:11.673 "zcopy": true, 00:15:11.673 "get_zone_info": false, 00:15:11.673 "zone_management": false, 00:15:11.673 "zone_append": false, 00:15:11.673 "compare": false, 00:15:11.673 "compare_and_write": false, 00:15:11.673 "abort": true, 00:15:11.673 "seek_hole": false, 00:15:11.673 "seek_data": false, 00:15:11.673 "copy": true, 00:15:11.673 "nvme_iov_md": false 00:15:11.673 }, 00:15:11.673 "memory_domains": [ 00:15:11.673 { 00:15:11.673 "dma_device_id": "system", 00:15:11.673 "dma_device_type": 1 00:15:11.673 }, 00:15:11.673 { 00:15:11.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.673 "dma_device_type": 2 00:15:11.673 } 00:15:11.673 ], 00:15:11.673 "driver_specific": {} 00:15:11.673 }' 00:15:11.673 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.934 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.934 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:11.934 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.934 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.934 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:11.934 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.934 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.934 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:11.934 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.194 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.194 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:12.194 13:15:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:12.454 [2024-07-25 13:15:22.712720] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:12.454 [2024-07-25 13:15:22.712746] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.454 [2024-07-25 13:15:22.712786] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.454 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:12.454 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:12.454 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:12.454 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:12.454 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:12.454 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:12.454 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:12.454 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:12.454 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:12.454 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:12.454 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:12.454 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.454 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.454 13:15:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:12.454 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:12.454 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.454 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.714 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:12.714 "name": "Existed_Raid", 00:15:12.714 "uuid": "bd5f0a1e-8d48-4743-a84f-221c50f6480d", 00:15:12.714 "strip_size_kb": 64, 00:15:12.714 "state": "offline", 00:15:12.714 "raid_level": "concat", 00:15:12.714 "superblock": false, 00:15:12.714 "num_base_bdevs": 3, 00:15:12.714 "num_base_bdevs_discovered": 2, 00:15:12.714 "num_base_bdevs_operational": 2, 00:15:12.714 "base_bdevs_list": [ 00:15:12.714 { 00:15:12.714 "name": null, 00:15:12.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.714 "is_configured": false, 00:15:12.714 "data_offset": 0, 00:15:12.714 "data_size": 65536 00:15:12.714 }, 00:15:12.714 { 00:15:12.714 "name": "BaseBdev2", 00:15:12.714 "uuid": "6b839a94-21bc-4f7e-90dd-2c80dca4b593", 00:15:12.714 "is_configured": true, 00:15:12.714 "data_offset": 0, 00:15:12.714 "data_size": 65536 00:15:12.714 }, 00:15:12.714 { 00:15:12.714 "name": "BaseBdev3", 00:15:12.714 "uuid": "3f2b1cdf-2d31-409d-91c2-4183568773f7", 00:15:12.714 "is_configured": true, 00:15:12.714 "data_offset": 0, 00:15:12.714 "data_size": 65536 00:15:12.714 } 00:15:12.714 ] 00:15:12.714 }' 00:15:12.714 13:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:12.714 13:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.284 13:15:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:13.284 13:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:13.284 13:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.284 13:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:13.544 13:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:13.544 13:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:13.544 13:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:13.544 [2024-07-25 13:15:24.008326] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:13.804 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:13.804 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:13.804 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.805 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:14.065 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:14.065 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:14.065 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:14.065 [2024-07-25 
13:15:24.502058] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:14.065 [2024-07-25 13:15:24.502099] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x128e710 name Existed_Raid, state offline 00:15:14.065 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:14.065 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:14.065 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.065 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:14.325 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:14.325 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:14.325 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:15:14.325 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:14.325 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:14.325 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:14.586 BaseBdev2 00:15:14.586 13:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:14.586 13:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:14.586 13:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:14.586 13:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:14.586 13:15:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:14.586 13:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:14.586 13:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:14.846 13:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:15.106 [ 00:15:15.106 { 00:15:15.106 "name": "BaseBdev2", 00:15:15.106 "aliases": [ 00:15:15.106 "9b17654a-1aa3-46dc-b16e-8c64e4293360" 00:15:15.106 ], 00:15:15.106 "product_name": "Malloc disk", 00:15:15.107 "block_size": 512, 00:15:15.107 "num_blocks": 65536, 00:15:15.107 "uuid": "9b17654a-1aa3-46dc-b16e-8c64e4293360", 00:15:15.107 "assigned_rate_limits": { 00:15:15.107 "rw_ios_per_sec": 0, 00:15:15.107 "rw_mbytes_per_sec": 0, 00:15:15.107 "r_mbytes_per_sec": 0, 00:15:15.107 "w_mbytes_per_sec": 0 00:15:15.107 }, 00:15:15.107 "claimed": false, 00:15:15.107 "zoned": false, 00:15:15.107 "supported_io_types": { 00:15:15.107 "read": true, 00:15:15.107 "write": true, 00:15:15.107 "unmap": true, 00:15:15.107 "flush": true, 00:15:15.107 "reset": true, 00:15:15.107 "nvme_admin": false, 00:15:15.107 "nvme_io": false, 00:15:15.107 "nvme_io_md": false, 00:15:15.107 "write_zeroes": true, 00:15:15.107 "zcopy": true, 00:15:15.107 "get_zone_info": false, 00:15:15.107 "zone_management": false, 00:15:15.107 "zone_append": false, 00:15:15.107 "compare": false, 00:15:15.107 "compare_and_write": false, 00:15:15.107 "abort": true, 00:15:15.107 "seek_hole": false, 00:15:15.107 "seek_data": false, 00:15:15.107 "copy": true, 00:15:15.107 "nvme_iov_md": false 00:15:15.107 }, 00:15:15.107 "memory_domains": [ 00:15:15.107 { 00:15:15.107 "dma_device_id": "system", 
00:15:15.107 "dma_device_type": 1 00:15:15.107 }, 00:15:15.107 { 00:15:15.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.107 "dma_device_type": 2 00:15:15.107 } 00:15:15.107 ], 00:15:15.107 "driver_specific": {} 00:15:15.107 } 00:15:15.107 ] 00:15:15.107 13:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:15.107 13:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:15.107 13:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:15.107 13:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:15.368 BaseBdev3 00:15:15.368 13:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:15.368 13:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:15.368 13:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:15.368 13:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:15.368 13:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:15.368 13:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:15.368 13:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:15.628 13:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:15.889 [ 00:15:15.889 { 00:15:15.889 "name": "BaseBdev3", 00:15:15.889 "aliases": [ 
00:15:15.889 "1ba8985d-d314-4b81-a16e-41b8aea34032" 00:15:15.889 ], 00:15:15.889 "product_name": "Malloc disk", 00:15:15.889 "block_size": 512, 00:15:15.889 "num_blocks": 65536, 00:15:15.889 "uuid": "1ba8985d-d314-4b81-a16e-41b8aea34032", 00:15:15.889 "assigned_rate_limits": { 00:15:15.889 "rw_ios_per_sec": 0, 00:15:15.889 "rw_mbytes_per_sec": 0, 00:15:15.889 "r_mbytes_per_sec": 0, 00:15:15.889 "w_mbytes_per_sec": 0 00:15:15.889 }, 00:15:15.889 "claimed": false, 00:15:15.889 "zoned": false, 00:15:15.889 "supported_io_types": { 00:15:15.889 "read": true, 00:15:15.889 "write": true, 00:15:15.889 "unmap": true, 00:15:15.889 "flush": true, 00:15:15.889 "reset": true, 00:15:15.889 "nvme_admin": false, 00:15:15.889 "nvme_io": false, 00:15:15.889 "nvme_io_md": false, 00:15:15.889 "write_zeroes": true, 00:15:15.889 "zcopy": true, 00:15:15.889 "get_zone_info": false, 00:15:15.889 "zone_management": false, 00:15:15.889 "zone_append": false, 00:15:15.889 "compare": false, 00:15:15.889 "compare_and_write": false, 00:15:15.889 "abort": true, 00:15:15.889 "seek_hole": false, 00:15:15.889 "seek_data": false, 00:15:15.889 "copy": true, 00:15:15.889 "nvme_iov_md": false 00:15:15.889 }, 00:15:15.889 "memory_domains": [ 00:15:15.889 { 00:15:15.889 "dma_device_id": "system", 00:15:15.889 "dma_device_type": 1 00:15:15.889 }, 00:15:15.889 { 00:15:15.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.889 "dma_device_type": 2 00:15:15.889 } 00:15:15.889 ], 00:15:15.889 "driver_specific": {} 00:15:15.889 } 00:15:15.889 ] 00:15:15.889 13:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:15.889 13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:15.889 13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:15.889 13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:15.889 [2024-07-25 13:15:26.344593] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:15.889 [2024-07-25 13:15:26.344634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:15.889 [2024-07-25 13:15:26.344652] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.889 [2024-07-25 13:15:26.346049] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:15.889 13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:15.889 13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:15.889 13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:15.889 13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:15.889 13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:15.889 13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:15.889 13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:15.889 13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:15.889 13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:15.889 13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:15.889 13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.889 
13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.149 13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:16.149 "name": "Existed_Raid", 00:15:16.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.149 "strip_size_kb": 64, 00:15:16.149 "state": "configuring", 00:15:16.149 "raid_level": "concat", 00:15:16.149 "superblock": false, 00:15:16.149 "num_base_bdevs": 3, 00:15:16.149 "num_base_bdevs_discovered": 2, 00:15:16.149 "num_base_bdevs_operational": 3, 00:15:16.149 "base_bdevs_list": [ 00:15:16.149 { 00:15:16.149 "name": "BaseBdev1", 00:15:16.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.149 "is_configured": false, 00:15:16.149 "data_offset": 0, 00:15:16.149 "data_size": 0 00:15:16.149 }, 00:15:16.149 { 00:15:16.149 "name": "BaseBdev2", 00:15:16.149 "uuid": "9b17654a-1aa3-46dc-b16e-8c64e4293360", 00:15:16.149 "is_configured": true, 00:15:16.149 "data_offset": 0, 00:15:16.149 "data_size": 65536 00:15:16.149 }, 00:15:16.149 { 00:15:16.149 "name": "BaseBdev3", 00:15:16.149 "uuid": "1ba8985d-d314-4b81-a16e-41b8aea34032", 00:15:16.149 "is_configured": true, 00:15:16.149 "data_offset": 0, 00:15:16.149 "data_size": 65536 00:15:16.149 } 00:15:16.149 ] 00:15:16.149 }' 00:15:16.149 13:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:16.149 13:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.761 13:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:17.021 [2024-07-25 13:15:27.395338] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:17.021 13:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
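After `bdev_raid_remove_base_bdev BaseBdev2`, the trace re-verifies the raid bdev and expects `num_base_bdevs_discovered` to drop while the state stays `configuring`. That count is simply the number of entries in `base_bdevs_list` with `is_configured` true. The sketch below reproduces that counting step over a stubbed excerpt of the list (one configured entry left, matching the post-removal state in the log); it is illustrative only and does not talk to a running SPDK target.

```shell
#!/bin/sh
# Sketch of the discovered-base-bdev count checked after a base bdev removal.
# The list below is a stubbed excerpt of base_bdevs_list, not live RPC output:
# BaseBdev1 not yet created, BaseBdev2 removed (name null), BaseBdev3 still configured.
base_bdevs_list='
{ "name": "BaseBdev1", "is_configured": false }
{ "name": null, "is_configured": false }
{ "name": "BaseBdev3", "is_configured": true }
'

# num_base_bdevs_discovered is the number of configured entries in the list.
discovered=$(printf '%s' "$base_bdevs_list" | grep -c '"is_configured": true')

echo "num_base_bdevs_discovered=$discovered"
```

With one configured base bdev remaining out of three operational, the raid bdev cannot come online, which is why the subsequent `verify_raid_bdev_state` calls in the log expect the `configuring` state.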
00:15:17.021 13:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:15:17.021 13:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:15:17.021 13:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:15:17.021 13:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:17.021 13:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:15:17.021 13:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:17.021 13:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:17.021 13:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:17.021 13:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:17.021 13:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:17.021 13:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:17.285 13:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:17.285 "name": "Existed_Raid",
00:15:17.285 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:17.285 "strip_size_kb": 64,
00:15:17.285 "state": "configuring",
00:15:17.285 "raid_level": "concat",
00:15:17.285 "superblock": false,
00:15:17.285 "num_base_bdevs": 3,
00:15:17.285 "num_base_bdevs_discovered": 1,
00:15:17.285 "num_base_bdevs_operational": 3,
00:15:17.285 "base_bdevs_list": [
00:15:17.285 {
00:15:17.285 "name": "BaseBdev1",
00:15:17.285 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:17.285 "is_configured": false,
00:15:17.285 "data_offset": 0,
00:15:17.285 "data_size": 0
00:15:17.285 },
00:15:17.285 {
00:15:17.285 "name": null,
00:15:17.285 "uuid": "9b17654a-1aa3-46dc-b16e-8c64e4293360",
00:15:17.285 "is_configured": false,
00:15:17.285 "data_offset": 0,
00:15:17.285 "data_size": 65536
00:15:17.285 },
00:15:17.285 {
00:15:17.285 "name": "BaseBdev3",
00:15:17.285 "uuid": "1ba8985d-d314-4b81-a16e-41b8aea34032",
00:15:17.285 "is_configured": true,
00:15:17.285 "data_offset": 0,
00:15:17.285 "data_size": 65536
00:15:17.285 }
00:15:17.285 ]
00:15:17.285 }'
00:15:17.285 13:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:17.285 13:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:17.855 13:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:17.855 13:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:18.115 13:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]]
00:15:18.115 13:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:15:18.376 [2024-07-25 13:15:28.666525] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:18.376 BaseBdev1
00:15:18.376 13:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1
00:15:18.376 13:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:15:18.376 13:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:18.376 13:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:15:18.376 13:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:18.376 13:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:18.376 13:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:18.636 13:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:18.636 [
00:15:18.636 {
00:15:18.636 "name": "BaseBdev1",
00:15:18.636 "aliases": [
00:15:18.636 "5ca2b91c-e603-4e83-a90f-034d8615e506"
00:15:18.636 ],
00:15:18.636 "product_name": "Malloc disk",
00:15:18.636 "block_size": 512,
00:15:18.636 "num_blocks": 65536,
00:15:18.636 "uuid": "5ca2b91c-e603-4e83-a90f-034d8615e506",
00:15:18.636 "assigned_rate_limits": {
00:15:18.636 "rw_ios_per_sec": 0,
00:15:18.636 "rw_mbytes_per_sec": 0,
00:15:18.636 "r_mbytes_per_sec": 0,
00:15:18.636 "w_mbytes_per_sec": 0
00:15:18.636 },
00:15:18.636 "claimed": true,
00:15:18.636 "claim_type": "exclusive_write",
00:15:18.636 "zoned": false,
00:15:18.636 "supported_io_types": {
00:15:18.636 "read": true,
00:15:18.636 "write": true,
00:15:18.636 "unmap": true,
00:15:18.636 "flush": true,
00:15:18.636 "reset": true,
00:15:18.636 "nvme_admin": false,
00:15:18.636 "nvme_io": false,
00:15:18.636 "nvme_io_md": false,
00:15:18.636 "write_zeroes": true,
00:15:18.636 "zcopy": true,
00:15:18.636 "get_zone_info": false,
00:15:18.636 "zone_management": false,
00:15:18.636 "zone_append": false,
00:15:18.636 "compare": false,
00:15:18.636 "compare_and_write": false,
00:15:18.636 "abort": true,
00:15:18.636 "seek_hole": false,
00:15:18.636 "seek_data": false,
00:15:18.636 "copy": true,
00:15:18.636 "nvme_iov_md": false
00:15:18.636 },
00:15:18.636 "memory_domains": [
00:15:18.636 {
00:15:18.636 "dma_device_id": "system",
00:15:18.636 "dma_device_type": 1
00:15:18.636 },
00:15:18.636 {
00:15:18.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:18.636 "dma_device_type": 2
00:15:18.636 }
00:15:18.636 ],
00:15:18.637 "driver_specific": {}
00:15:18.637 }
00:15:18.637 ]
00:15:18.896 13:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:15:18.896 13:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:18.896 13:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:15:18.897 13:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:15:18.897 13:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:15:18.897 13:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:18.897 13:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:15:18.897 13:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:18.897 13:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:18.897 13:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:18.897 13:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:18.897 13:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:18.897 13:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:18.897 13:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:18.897 "name": "Existed_Raid",
00:15:18.897 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:18.897 "strip_size_kb": 64,
00:15:18.897 "state": "configuring",
00:15:18.897 "raid_level": "concat",
00:15:18.897 "superblock": false,
00:15:18.897 "num_base_bdevs": 3,
00:15:18.897 "num_base_bdevs_discovered": 2,
00:15:18.897 "num_base_bdevs_operational": 3,
00:15:18.897 "base_bdevs_list": [
00:15:18.897 {
00:15:18.897 "name": "BaseBdev1",
00:15:18.897 "uuid": "5ca2b91c-e603-4e83-a90f-034d8615e506",
00:15:18.897 "is_configured": true,
00:15:18.897 "data_offset": 0,
00:15:18.897 "data_size": 65536
00:15:18.897 },
00:15:18.897 {
00:15:18.897 "name": null,
00:15:18.897 "uuid": "9b17654a-1aa3-46dc-b16e-8c64e4293360",
00:15:18.897 "is_configured": false,
00:15:18.897 "data_offset": 0,
00:15:18.897 "data_size": 65536
00:15:18.897 },
00:15:18.897 {
00:15:18.897 "name": "BaseBdev3",
00:15:18.897 "uuid": "1ba8985d-d314-4b81-a16e-41b8aea34032",
00:15:18.897 "is_configured": true,
00:15:18.897 "data_offset": 0,
00:15:18.897 "data_size": 65536
00:15:18.897 }
00:15:18.897 ]
00:15:18.897 }'
00:15:18.897 13:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:18.897 13:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:19.837 13:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:19.837 13:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:20.097 13:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]]
00:15:20.097 13:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3
00:15:20.358 [2024-07-25 13:15:30.679838] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:20.358 13:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:20.358 13:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:15:20.358 13:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:15:20.358 13:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:15:20.358 13:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:20.358 13:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:15:20.358 13:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:20.358 13:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:20.358 13:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:20.358 13:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:20.358 13:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:20.358 13:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:20.618 13:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:20.618 "name": "Existed_Raid",
00:15:20.618 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:20.618 "strip_size_kb": 64,
00:15:20.618 "state": "configuring",
00:15:20.618 "raid_level": "concat",
00:15:20.618 "superblock": false,
00:15:20.618 "num_base_bdevs": 3,
00:15:20.618 "num_base_bdevs_discovered": 1,
00:15:20.618 "num_base_bdevs_operational": 3,
00:15:20.618 "base_bdevs_list": [
00:15:20.618 {
00:15:20.618 "name": "BaseBdev1",
00:15:20.618 "uuid": "5ca2b91c-e603-4e83-a90f-034d8615e506",
00:15:20.618 "is_configured": true,
00:15:20.618 "data_offset": 0,
00:15:20.618 "data_size": 65536
00:15:20.618 },
00:15:20.618 {
00:15:20.618 "name": null,
00:15:20.618 "uuid": "9b17654a-1aa3-46dc-b16e-8c64e4293360",
00:15:20.618 "is_configured": false,
00:15:20.618 "data_offset": 0,
00:15:20.618 "data_size": 65536
00:15:20.618 },
00:15:20.618 {
00:15:20.618 "name": null,
00:15:20.618 "uuid": "1ba8985d-d314-4b81-a16e-41b8aea34032",
00:15:20.618 "is_configured": false,
00:15:20.618 "data_offset": 0,
00:15:20.618 "data_size": 65536
00:15:20.618 }
00:15:20.618 ]
00:15:20.618 }'
00:15:20.618 13:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:20.618 13:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:21.188 13:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:21.188 13:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:15:21.447 13:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]]
00:15:21.447 13:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:15:21.447 [2024-07-25 13:15:31.887047] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:21.447 13:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:21.447 13:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:15:21.447 13:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:15:21.447 13:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:15:21.447 13:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:21.447 13:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:15:21.447 13:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:21.447 13:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:21.447 13:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:21.447 13:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:21.447 13:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:21.447 13:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:21.707 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:21.707 "name": "Existed_Raid",
00:15:21.707 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:21.707 "strip_size_kb": 64,
00:15:21.707 "state": "configuring",
00:15:21.707 "raid_level": "concat",
00:15:21.707 "superblock": false,
00:15:21.707 "num_base_bdevs": 3,
00:15:21.707 "num_base_bdevs_discovered": 2,
00:15:21.707 "num_base_bdevs_operational": 3,
00:15:21.707 "base_bdevs_list": [
00:15:21.707 {
00:15:21.707 "name": "BaseBdev1",
00:15:21.707 "uuid": "5ca2b91c-e603-4e83-a90f-034d8615e506",
00:15:21.707 "is_configured": true,
00:15:21.707 "data_offset": 0,
00:15:21.707 "data_size": 65536
00:15:21.707 },
00:15:21.707 {
00:15:21.707 "name": null,
00:15:21.707 "uuid": "9b17654a-1aa3-46dc-b16e-8c64e4293360",
00:15:21.707 "is_configured": false,
00:15:21.707 "data_offset": 0,
00:15:21.707 "data_size": 65536
00:15:21.707 },
00:15:21.707 {
00:15:21.707 "name": "BaseBdev3",
00:15:21.707 "uuid": "1ba8985d-d314-4b81-a16e-41b8aea34032",
00:15:21.707 "is_configured": true,
00:15:21.707 "data_offset": 0,
00:15:21.707 "data_size": 65536
00:15:21.707 }
00:15:21.707 ]
00:15:21.707 }'
00:15:21.707 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:21.707 13:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:22.278 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:22.278 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:15:22.278 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]]
00:15:22.278 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:15:22.538 [2024-07-25 13:15:32.865638] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:22.538 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:22.538 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:15:22.538 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:15:22.538 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:15:22.538 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:22.538 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:15:22.538 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:22.538 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:22.538 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:22.538 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:22.538 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:22.538 13:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:22.798 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:22.798 "name": "Existed_Raid",
00:15:22.798 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:22.798 "strip_size_kb": 64,
00:15:22.798 "state": "configuring",
00:15:22.798 "raid_level": "concat",
00:15:22.798 "superblock": false,
00:15:22.798 "num_base_bdevs": 3,
00:15:22.798 "num_base_bdevs_discovered": 1,
00:15:22.798 "num_base_bdevs_operational": 3,
00:15:22.798 "base_bdevs_list": [
00:15:22.798 {
00:15:22.798 "name": null,
00:15:22.798 "uuid": "5ca2b91c-e603-4e83-a90f-034d8615e506",
00:15:22.798 "is_configured": false,
00:15:22.798 "data_offset": 0,
00:15:22.798 "data_size": 65536
00:15:22.798 },
00:15:22.798 {
00:15:22.798 "name": null,
00:15:22.798 "uuid": "9b17654a-1aa3-46dc-b16e-8c64e4293360",
00:15:22.798 "is_configured": false,
00:15:22.798 "data_offset": 0,
00:15:22.798 "data_size": 65536
00:15:22.798 },
00:15:22.798 {
00:15:22.798 "name": "BaseBdev3",
00:15:22.798 "uuid": "1ba8985d-d314-4b81-a16e-41b8aea34032",
00:15:22.798 "is_configured": true,
00:15:22.798 "data_offset": 0,
00:15:22.798 "data_size": 65536
00:15:22.798 }
00:15:22.798 ]
00:15:22.798 }'
00:15:22.798 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:22.798 13:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:23.376 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:23.376 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:23.376 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]]
00:15:23.376 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:15:23.636 [2024-07-25 13:15:33.957309] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:23.636 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:23.636 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:15:23.636 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:15:23.636 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:15:23.636 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:23.636 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:15:23.636 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:23.636 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:23.636 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:23.636 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:23.636 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:23.636 13:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:23.895 13:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:23.895 "name": "Existed_Raid",
00:15:23.895 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:23.895 "strip_size_kb": 64,
00:15:23.895 "state": "configuring",
00:15:23.895 "raid_level": "concat",
00:15:23.895 "superblock": false,
00:15:23.895 "num_base_bdevs": 3,
00:15:23.895 "num_base_bdevs_discovered": 2,
00:15:23.895 "num_base_bdevs_operational": 3,
00:15:23.895 "base_bdevs_list": [
00:15:23.895 {
00:15:23.895 "name": null,
00:15:23.895 "uuid": "5ca2b91c-e603-4e83-a90f-034d8615e506",
00:15:23.895 "is_configured": false,
00:15:23.895 "data_offset": 0,
00:15:23.895 "data_size": 65536
00:15:23.895 },
00:15:23.895 {
00:15:23.895 "name": "BaseBdev2",
00:15:23.895 "uuid": "9b17654a-1aa3-46dc-b16e-8c64e4293360",
00:15:23.895 "is_configured": true,
00:15:23.895 "data_offset": 0,
00:15:23.895 "data_size": 65536
00:15:23.895 },
00:15:23.895 {
00:15:23.895 "name": "BaseBdev3",
00:15:23.895 "uuid": "1ba8985d-d314-4b81-a16e-41b8aea34032",
00:15:23.895 "is_configured": true,
00:15:23.895 "data_offset": 0,
00:15:23.895 "data_size": 65536
00:15:23.895 }
00:15:23.895 ]
00:15:23.895 }'
00:15:23.895 13:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
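Throughout this trace the test removes and re-adds base bdevs (`bdev_raid_remove_base_bdev`, `bdev_raid_add_base_bdev`) and then probes individual slots with `jq '.[0].base_bdevs_list[N].is_configured'`. A removed member keeps its slot and uuid but its name becomes null and `is_configured` flips to false; re-adding flips it back. As a minimal offline sketch (not SPDK code; the two dumps are abbreviated from the state dumps in this log around the BaseBdev2 remove/re-add cycle), the slot-level checks look like:

```python
import json

# Abbreviated base_bdevs_list before and after `bdev_raid_add_base_bdev
# Existed_Raid BaseBdev2`, copied from the dumps in this log.
after_remove = json.loads("""
[{"name": "Existed_Raid",
  "base_bdevs_list": [
    {"name": null, "uuid": "5ca2b91c-e603-4e83-a90f-034d8615e506", "is_configured": false},
    {"name": null, "uuid": "9b17654a-1aa3-46dc-b16e-8c64e4293360", "is_configured": false},
    {"name": "BaseBdev3", "uuid": "1ba8985d-d314-4b81-a16e-41b8aea34032", "is_configured": true}]}]
""")
after_add = json.loads("""
[{"name": "Existed_Raid",
  "base_bdevs_list": [
    {"name": null, "uuid": "5ca2b91c-e603-4e83-a90f-034d8615e506", "is_configured": false},
    {"name": "BaseBdev2", "uuid": "9b17654a-1aa3-46dc-b16e-8c64e4293360", "is_configured": true},
    {"name": "BaseBdev3", "uuid": "1ba8985d-d314-4b81-a16e-41b8aea34032", "is_configured": true}]}]
""")

def configured_count(dump):
    # Counts configured slots, i.e. what num_base_bdevs_discovered reports.
    return sum(slot["is_configured"] for slot in dump[0]["base_bdevs_list"])

# Equivalent of jq '.[0].base_bdevs_list[1].is_configured' before/after the re-add
assert after_remove[0]["base_bdevs_list"][1]["is_configured"] is False
assert after_add[0]["base_bdevs_list"][1]["is_configured"] is True
print(configured_count(after_remove), configured_count(after_add))  # 1 2
```

The slot is addressed by position, which is why the test can re-add a bdev by name (or, as it does later with NewBaseBdev, by recreating a malloc bdev with the slot's original uuid) and land it back in the same place.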
00:15:23.895 13:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:24.464 13:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:24.464 13:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:24.464 13:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]]
00:15:24.464 13:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:24.464 13:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:15:24.724 13:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5ca2b91c-e603-4e83-a90f-034d8615e506
00:15:24.984 [2024-07-25 13:15:35.245186] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:15:24.984 [2024-07-25 13:15:35.245222] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x128e380
00:15:24.984 [2024-07-25 13:15:35.245230] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:15:24.984 [2024-07-25 13:15:35.245421] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x128cf60
00:15:24.984 [2024-07-25 13:15:35.245535] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x128e380
00:15:24.984 [2024-07-25 13:15:35.245544] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x128e380
00:15:24.984 [2024-07-25 13:15:35.245703] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:24.984 NewBaseBdev
00:15:24.984 13:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev
00:15:24.984 13:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:15:24.984 13:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:24.984 13:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:15:24.984 13:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:24.984 13:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:24.984 13:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:25.243 13:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000
00:15:25.243 [
00:15:25.244 {
00:15:25.244 "name": "NewBaseBdev",
00:15:25.244 "aliases": [
00:15:25.244 "5ca2b91c-e603-4e83-a90f-034d8615e506"
00:15:25.244 ],
00:15:25.244 "product_name": "Malloc disk",
00:15:25.244 "block_size": 512,
00:15:25.244 "num_blocks": 65536,
00:15:25.244 "uuid": "5ca2b91c-e603-4e83-a90f-034d8615e506",
00:15:25.244 "assigned_rate_limits": {
00:15:25.244 "rw_ios_per_sec": 0,
00:15:25.244 "rw_mbytes_per_sec": 0,
00:15:25.244 "r_mbytes_per_sec": 0,
00:15:25.244 "w_mbytes_per_sec": 0
00:15:25.244 },
00:15:25.244 "claimed": true,
00:15:25.244 "claim_type": "exclusive_write",
00:15:25.244 "zoned": false,
00:15:25.244 "supported_io_types": {
00:15:25.244 "read": true,
00:15:25.244 "write": true,
00:15:25.244 "unmap": true,
00:15:25.244 "flush": true,
00:15:25.244 "reset": true,
00:15:25.244 "nvme_admin": false,
00:15:25.244 "nvme_io": false,
00:15:25.244 "nvme_io_md": false,
00:15:25.244 "write_zeroes": true,
00:15:25.244 "zcopy": true,
00:15:25.244 "get_zone_info": false,
00:15:25.244 "zone_management": false,
00:15:25.244 "zone_append": false,
00:15:25.244 "compare": false,
00:15:25.244 "compare_and_write": false,
00:15:25.244 "abort": true,
00:15:25.244 "seek_hole": false,
00:15:25.244 "seek_data": false,
00:15:25.244 "copy": true,
00:15:25.244 "nvme_iov_md": false
00:15:25.244 },
00:15:25.244 "memory_domains": [
00:15:25.244 {
00:15:25.244 "dma_device_id": "system",
00:15:25.244 "dma_device_type": 1
00:15:25.244 },
00:15:25.244 {
00:15:25.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:25.244 "dma_device_type": 2
00:15:25.244 }
00:15:25.244 ],
00:15:25.244 "driver_specific": {}
00:15:25.244 }
00:15:25.244 ]
00:15:25.244 13:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:15:25.244 13:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:15:25.244 13:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:15:25.244 13:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:15:25.244 13:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:15:25.244 13:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:15:25.244 13:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:15:25.244 13:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:15:25.244 13:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:15:25.244 13:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:15:25.244 13:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:15:25.244 13:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:25.244 13:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:25.503 13:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:15:25.503 "name": "Existed_Raid",
00:15:25.503 "uuid": "f2cfbaa4-4abd-4e76-b211-e33b153ba42a",
00:15:25.503 "strip_size_kb": 64,
00:15:25.503 "state": "online",
00:15:25.503 "raid_level": "concat",
00:15:25.503 "superblock": false,
00:15:25.503 "num_base_bdevs": 3,
00:15:25.503 "num_base_bdevs_discovered": 3,
00:15:25.503 "num_base_bdevs_operational": 3,
00:15:25.503 "base_bdevs_list": [
00:15:25.503 {
00:15:25.503 "name": "NewBaseBdev",
00:15:25.503 "uuid": "5ca2b91c-e603-4e83-a90f-034d8615e506",
00:15:25.503 "is_configured": true,
00:15:25.503 "data_offset": 0,
00:15:25.503 "data_size": 65536
00:15:25.503 },
00:15:25.503 {
00:15:25.503 "name": "BaseBdev2",
00:15:25.503 "uuid": "9b17654a-1aa3-46dc-b16e-8c64e4293360",
00:15:25.503 "is_configured": true,
00:15:25.503 "data_offset": 0,
00:15:25.503 "data_size": 65536
00:15:25.503 },
00:15:25.503 {
00:15:25.503 "name": "BaseBdev3",
00:15:25.503 "uuid": "1ba8985d-d314-4b81-a16e-41b8aea34032",
00:15:25.503 "is_configured": true,
00:15:25.503 "data_offset": 0,
00:15:25.503 "data_size": 65536
00:15:25.503 }
00:15:25.503 ]
00:15:25.503 }'
00:15:25.503 13:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:15:25.503 13:15:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:26.071 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid
00:15:26.071 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:15:26.071 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:15:26.071 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:15:26.071 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:15:26.071 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name
00:15:26.071 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:15:26.071 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:15:26.071 [2024-07-25 13:15:36.528838] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:26.071 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:15:26.071 "name": "Existed_Raid",
00:15:26.071 "aliases": [
00:15:26.071 "f2cfbaa4-4abd-4e76-b211-e33b153ba42a"
00:15:26.071 ],
00:15:26.071 "product_name": "Raid Volume",
00:15:26.071 "block_size": 512,
00:15:26.071 "num_blocks": 196608,
00:15:26.071 "uuid": "f2cfbaa4-4abd-4e76-b211-e33b153ba42a",
00:15:26.071 "assigned_rate_limits": {
00:15:26.071 "rw_ios_per_sec": 0,
00:15:26.071 "rw_mbytes_per_sec": 0,
00:15:26.071 "r_mbytes_per_sec": 0,
00:15:26.071 "w_mbytes_per_sec": 0
00:15:26.071 },
00:15:26.071 "claimed": false,
00:15:26.071 "zoned": false,
00:15:26.071 "supported_io_types": {
00:15:26.071 "read": true,
00:15:26.071 "write": true,
00:15:26.071 "unmap": true,
00:15:26.071 "flush": true,
00:15:26.071 "reset": true,
00:15:26.071 "nvme_admin": false,
00:15:26.071 "nvme_io": false,
00:15:26.071 "nvme_io_md": false,
00:15:26.071 "write_zeroes": true,
00:15:26.071 "zcopy": false,
00:15:26.071 "get_zone_info": false,
00:15:26.071 "zone_management": false,
00:15:26.071 "zone_append": false,
00:15:26.071 "compare": false,
00:15:26.071 "compare_and_write": false,
00:15:26.071 "abort": false,
00:15:26.072 "seek_hole": false,
00:15:26.072 "seek_data": false,
00:15:26.072 "copy": false,
00:15:26.072 "nvme_iov_md": false
00:15:26.072 },
00:15:26.072 "memory_domains": [
00:15:26.072 {
00:15:26.072 "dma_device_id": "system",
00:15:26.072 "dma_device_type": 1
00:15:26.072 },
00:15:26.072 {
00:15:26.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:26.072 "dma_device_type": 2
00:15:26.072 },
00:15:26.072 {
00:15:26.072 "dma_device_id": "system",
00:15:26.072 "dma_device_type": 1
00:15:26.072 },
00:15:26.072 {
00:15:26.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:26.072 "dma_device_type": 2
00:15:26.072 },
00:15:26.072 {
00:15:26.072 "dma_device_id": "system",
00:15:26.072 "dma_device_type": 1
00:15:26.072 },
00:15:26.072 {
00:15:26.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:26.072 "dma_device_type": 2
00:15:26.072 }
00:15:26.072 ],
00:15:26.072 "driver_specific": {
00:15:26.072 "raid": {
00:15:26.072 "uuid": "f2cfbaa4-4abd-4e76-b211-e33b153ba42a",
00:15:26.072 "strip_size_kb": 64,
00:15:26.072 "state": "online",
00:15:26.072 "raid_level": "concat",
00:15:26.072 "superblock": false,
00:15:26.072 "num_base_bdevs": 3,
00:15:26.072 "num_base_bdevs_discovered": 3,
00:15:26.072 "num_base_bdevs_operational": 3,
00:15:26.072 "base_bdevs_list": [
00:15:26.072 {
00:15:26.072 "name": "NewBaseBdev",
00:15:26.072 "uuid": "5ca2b91c-e603-4e83-a90f-034d8615e506",
00:15:26.072 "is_configured": true,
00:15:26.072 "data_offset": 0,
00:15:26.072 "data_size": 65536
00:15:26.072 },
00:15:26.072 {
00:15:26.072 "name": "BaseBdev2",
00:15:26.072 "uuid": "9b17654a-1aa3-46dc-b16e-8c64e4293360",
00:15:26.072 "is_configured": true,
00:15:26.072 "data_offset": 0,
00:15:26.072 "data_size": 65536
00:15:26.072 },
00:15:26.072 {
00:15:26.072 "name": "BaseBdev3",
00:15:26.072 "uuid": "1ba8985d-d314-4b81-a16e-41b8aea34032",
00:15:26.072 "is_configured": true,
00:15:26.072 "data_offset": 0,
00:15:26.072 "data_size": 65536
00:15:26.072 }
00:15:26.072 ]
00:15:26.072 }
00:15:26.072 }
00:15:26.072 }'
00:15:26.072 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:26.332 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev
00:15:26.332 BaseBdev2
00:15:26.332 BaseBdev3'
00:15:26.332 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:15:26.332 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev
00:15:26.332 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:15:26.332 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:15:26.332 "name": "NewBaseBdev",
00:15:26.332 "aliases": [
00:15:26.332 "5ca2b91c-e603-4e83-a90f-034d8615e506"
00:15:26.332 ],
00:15:26.332 "product_name": "Malloc disk",
00:15:26.332 "block_size": 512,
00:15:26.332 "num_blocks": 65536,
00:15:26.332 "uuid": "5ca2b91c-e603-4e83-a90f-034d8615e506",
00:15:26.332 "assigned_rate_limits": {
00:15:26.332 "rw_ios_per_sec": 0,
00:15:26.332 "rw_mbytes_per_sec": 0,
00:15:26.332 "r_mbytes_per_sec": 0,
00:15:26.332 "w_mbytes_per_sec": 0
00:15:26.332 },
00:15:26.332 "claimed": true,
00:15:26.332 "claim_type": "exclusive_write",
00:15:26.332 "zoned": false,
00:15:26.332 "supported_io_types": {
00:15:26.332 "read": true,
00:15:26.332 "write": true,
00:15:26.332 "unmap": true,
00:15:26.332 "flush": true,
00:15:26.332 "reset": true,
00:15:26.332 "nvme_admin": false,
00:15:26.332 "nvme_io": false,
00:15:26.332 "nvme_io_md": false,
00:15:26.332 "write_zeroes": true,
00:15:26.332 "zcopy": true, 00:15:26.332 "get_zone_info": false, 00:15:26.332 "zone_management": false, 00:15:26.332 "zone_append": false, 00:15:26.332 "compare": false, 00:15:26.332 "compare_and_write": false, 00:15:26.332 "abort": true, 00:15:26.332 "seek_hole": false, 00:15:26.332 "seek_data": false, 00:15:26.332 "copy": true, 00:15:26.332 "nvme_iov_md": false 00:15:26.332 }, 00:15:26.332 "memory_domains": [ 00:15:26.332 { 00:15:26.332 "dma_device_id": "system", 00:15:26.332 "dma_device_type": 1 00:15:26.332 }, 00:15:26.332 { 00:15:26.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.332 "dma_device_type": 2 00:15:26.332 } 00:15:26.332 ], 00:15:26.332 "driver_specific": {} 00:15:26.332 }' 00:15:26.332 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.591 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.591 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:26.591 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.591 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.591 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:26.591 13:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.591 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.591 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:26.591 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.851 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.851 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:26.851 13:15:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:26.851 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:26.851 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:27.110 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:27.110 "name": "BaseBdev2", 00:15:27.110 "aliases": [ 00:15:27.110 "9b17654a-1aa3-46dc-b16e-8c64e4293360" 00:15:27.110 ], 00:15:27.110 "product_name": "Malloc disk", 00:15:27.110 "block_size": 512, 00:15:27.110 "num_blocks": 65536, 00:15:27.110 "uuid": "9b17654a-1aa3-46dc-b16e-8c64e4293360", 00:15:27.110 "assigned_rate_limits": { 00:15:27.110 "rw_ios_per_sec": 0, 00:15:27.110 "rw_mbytes_per_sec": 0, 00:15:27.110 "r_mbytes_per_sec": 0, 00:15:27.110 "w_mbytes_per_sec": 0 00:15:27.110 }, 00:15:27.110 "claimed": true, 00:15:27.110 "claim_type": "exclusive_write", 00:15:27.110 "zoned": false, 00:15:27.110 "supported_io_types": { 00:15:27.110 "read": true, 00:15:27.110 "write": true, 00:15:27.110 "unmap": true, 00:15:27.110 "flush": true, 00:15:27.110 "reset": true, 00:15:27.110 "nvme_admin": false, 00:15:27.110 "nvme_io": false, 00:15:27.110 "nvme_io_md": false, 00:15:27.110 "write_zeroes": true, 00:15:27.110 "zcopy": true, 00:15:27.110 "get_zone_info": false, 00:15:27.110 "zone_management": false, 00:15:27.110 "zone_append": false, 00:15:27.110 "compare": false, 00:15:27.110 "compare_and_write": false, 00:15:27.110 "abort": true, 00:15:27.110 "seek_hole": false, 00:15:27.110 "seek_data": false, 00:15:27.110 "copy": true, 00:15:27.110 "nvme_iov_md": false 00:15:27.110 }, 00:15:27.110 "memory_domains": [ 00:15:27.110 { 00:15:27.110 "dma_device_id": "system", 00:15:27.110 "dma_device_type": 1 00:15:27.110 }, 00:15:27.110 { 00:15:27.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.110 "dma_device_type": 2 
00:15:27.110 } 00:15:27.110 ], 00:15:27.110 "driver_specific": {} 00:15:27.111 }' 00:15:27.111 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.111 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.111 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:27.111 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.111 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.111 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:27.111 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.111 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.370 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:27.370 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.370 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.370 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:27.370 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:27.370 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:27.370 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:27.629 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:27.629 "name": "BaseBdev3", 00:15:27.629 "aliases": [ 00:15:27.629 "1ba8985d-d314-4b81-a16e-41b8aea34032" 00:15:27.629 ], 00:15:27.629 "product_name": 
"Malloc disk", 00:15:27.629 "block_size": 512, 00:15:27.629 "num_blocks": 65536, 00:15:27.629 "uuid": "1ba8985d-d314-4b81-a16e-41b8aea34032", 00:15:27.629 "assigned_rate_limits": { 00:15:27.629 "rw_ios_per_sec": 0, 00:15:27.629 "rw_mbytes_per_sec": 0, 00:15:27.629 "r_mbytes_per_sec": 0, 00:15:27.629 "w_mbytes_per_sec": 0 00:15:27.629 }, 00:15:27.629 "claimed": true, 00:15:27.629 "claim_type": "exclusive_write", 00:15:27.629 "zoned": false, 00:15:27.629 "supported_io_types": { 00:15:27.629 "read": true, 00:15:27.629 "write": true, 00:15:27.629 "unmap": true, 00:15:27.629 "flush": true, 00:15:27.629 "reset": true, 00:15:27.629 "nvme_admin": false, 00:15:27.629 "nvme_io": false, 00:15:27.629 "nvme_io_md": false, 00:15:27.629 "write_zeroes": true, 00:15:27.629 "zcopy": true, 00:15:27.629 "get_zone_info": false, 00:15:27.629 "zone_management": false, 00:15:27.629 "zone_append": false, 00:15:27.629 "compare": false, 00:15:27.629 "compare_and_write": false, 00:15:27.629 "abort": true, 00:15:27.629 "seek_hole": false, 00:15:27.629 "seek_data": false, 00:15:27.629 "copy": true, 00:15:27.629 "nvme_iov_md": false 00:15:27.629 }, 00:15:27.629 "memory_domains": [ 00:15:27.629 { 00:15:27.629 "dma_device_id": "system", 00:15:27.629 "dma_device_type": 1 00:15:27.629 }, 00:15:27.629 { 00:15:27.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.629 "dma_device_type": 2 00:15:27.629 } 00:15:27.629 ], 00:15:27.629 "driver_specific": {} 00:15:27.629 }' 00:15:27.629 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.629 13:15:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.629 13:15:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:27.629 13:15:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.629 13:15:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.889 13:15:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:27.889 13:15:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.889 13:15:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.889 13:15:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:27.889 13:15:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.889 13:15:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.889 13:15:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:27.889 13:15:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:28.148 [2024-07-25 13:15:38.497786] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:28.148 [2024-07-25 13:15:38.497808] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.148 [2024-07-25 13:15:38.497858] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.149 [2024-07-25 13:15:38.497908] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.149 [2024-07-25 13:15:38.497920] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x128e380 name Existed_Raid, state offline 00:15:28.149 13:15:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 867347 00:15:28.149 13:15:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 867347 ']' 00:15:28.149 13:15:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 867347 00:15:28.149 13:15:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@955 -- # uname 00:15:28.149 13:15:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.149 13:15:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 867347 00:15:28.149 13:15:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:28.149 13:15:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:28.149 13:15:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 867347' 00:15:28.149 killing process with pid 867347 00:15:28.149 13:15:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 867347 00:15:28.149 [2024-07-25 13:15:38.570027] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:28.149 13:15:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 867347 00:15:28.149 [2024-07-25 13:15:38.611779] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:28.718 00:15:28.718 real 0m26.691s 00:15:28.718 user 0m48.844s 00:15:28.718 sys 0m4.795s 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.718 ************************************ 00:15:28.718 END TEST raid_state_function_test 00:15:28.718 ************************************ 00:15:28.718 13:15:38 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:15:28.718 13:15:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:28.718 13:15:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:28.718 
13:15:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:28.718 ************************************ 00:15:28.718 START TEST raid_state_function_test_sb 00:15:28.718 ************************************ 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:28.718 
13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=872448 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 872448' 00:15:28.718 Process raid pid: 872448 00:15:28.718 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:28.719 13:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 872448 /var/tmp/spdk-raid.sock 00:15:28.719 13:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 872448 ']' 
00:15:28.719 13:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:28.719 13:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:28.719 13:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:28.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:28.719 13:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:28.719 13:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.719 [2024-07-25 13:15:39.055231] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:15:28.719 [2024-07-25 13:15:39.055291] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 
0000:3d:01.5 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3f:01.3 cannot be 
used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:28.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.719 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:28.719 [2024-07-25 13:15:39.190589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.978 [2024-07-25 13:15:39.273836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.978 [2024-07-25 13:15:39.327124] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:15:28.978 [2024-07-25 13:15:39.327152] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.547 13:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:29.547 13:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:29.547 13:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:29.807 [2024-07-25 13:15:40.113112] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:29.807 [2024-07-25 13:15:40.113161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:29.807 [2024-07-25 13:15:40.113173] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.807 [2024-07-25 13:15:40.113185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.807 [2024-07-25 13:15:40.113193] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:29.807 [2024-07-25 13:15:40.113203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:29.807 13:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:29.807 13:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:29.807 13:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:29.807 13:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:29.807 13:15:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:29.807 13:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:29.807 13:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.807 13:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:29.807 13:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.807 13:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.807 13:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.807 13:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.066 13:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:30.066 "name": "Existed_Raid", 00:15:30.066 "uuid": "c09f5d56-a5aa-4f72-b47b-3bb0f4b4f6e9", 00:15:30.066 "strip_size_kb": 64, 00:15:30.066 "state": "configuring", 00:15:30.066 "raid_level": "concat", 00:15:30.066 "superblock": true, 00:15:30.066 "num_base_bdevs": 3, 00:15:30.066 "num_base_bdevs_discovered": 0, 00:15:30.066 "num_base_bdevs_operational": 3, 00:15:30.066 "base_bdevs_list": [ 00:15:30.066 { 00:15:30.066 "name": "BaseBdev1", 00:15:30.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.066 "is_configured": false, 00:15:30.066 "data_offset": 0, 00:15:30.066 "data_size": 0 00:15:30.066 }, 00:15:30.066 { 00:15:30.066 "name": "BaseBdev2", 00:15:30.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.066 "is_configured": false, 00:15:30.066 "data_offset": 0, 00:15:30.066 "data_size": 0 00:15:30.066 }, 00:15:30.066 { 00:15:30.066 "name": "BaseBdev3", 00:15:30.066 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:30.066 "is_configured": false, 00:15:30.066 "data_offset": 0, 00:15:30.066 "data_size": 0 00:15:30.066 } 00:15:30.066 ] 00:15:30.066 }' 00:15:30.066 13:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:30.066 13:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.635 13:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:30.894 [2024-07-25 13:15:41.163758] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.894 [2024-07-25 13:15:41.163786] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1957f40 name Existed_Raid, state configuring 00:15:30.894 13:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:30.894 [2024-07-25 13:15:41.340254] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.894 [2024-07-25 13:15:41.340276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.894 [2024-07-25 13:15:41.340284] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.894 [2024-07-25 13:15:41.340295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.894 [2024-07-25 13:15:41.340303] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:30.894 [2024-07-25 13:15:41.340313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:30.894 13:15:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:31.193 [2024-07-25 13:15:41.586992] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.193 BaseBdev1 00:15:31.193 13:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:31.193 13:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:31.193 13:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:31.193 13:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:31.193 13:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:31.193 13:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:31.193 13:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.452 13:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:31.452 [ 00:15:31.452 { 00:15:31.452 "name": "BaseBdev1", 00:15:31.452 "aliases": [ 00:15:31.452 "397cbdc5-63d4-417b-bb90-d3099333630c" 00:15:31.452 ], 00:15:31.452 "product_name": "Malloc disk", 00:15:31.452 "block_size": 512, 00:15:31.452 "num_blocks": 65536, 00:15:31.452 "uuid": "397cbdc5-63d4-417b-bb90-d3099333630c", 00:15:31.452 "assigned_rate_limits": { 00:15:31.452 "rw_ios_per_sec": 0, 00:15:31.452 "rw_mbytes_per_sec": 0, 00:15:31.452 "r_mbytes_per_sec": 0, 00:15:31.452 "w_mbytes_per_sec": 0 00:15:31.452 }, 00:15:31.452 "claimed": true, 00:15:31.452 
"claim_type": "exclusive_write", 00:15:31.452 "zoned": false, 00:15:31.452 "supported_io_types": { 00:15:31.452 "read": true, 00:15:31.452 "write": true, 00:15:31.452 "unmap": true, 00:15:31.452 "flush": true, 00:15:31.452 "reset": true, 00:15:31.452 "nvme_admin": false, 00:15:31.452 "nvme_io": false, 00:15:31.452 "nvme_io_md": false, 00:15:31.452 "write_zeroes": true, 00:15:31.452 "zcopy": true, 00:15:31.452 "get_zone_info": false, 00:15:31.452 "zone_management": false, 00:15:31.452 "zone_append": false, 00:15:31.452 "compare": false, 00:15:31.452 "compare_and_write": false, 00:15:31.452 "abort": true, 00:15:31.452 "seek_hole": false, 00:15:31.452 "seek_data": false, 00:15:31.452 "copy": true, 00:15:31.452 "nvme_iov_md": false 00:15:31.452 }, 00:15:31.452 "memory_domains": [ 00:15:31.452 { 00:15:31.452 "dma_device_id": "system", 00:15:31.452 "dma_device_type": 1 00:15:31.452 }, 00:15:31.452 { 00:15:31.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.452 "dma_device_type": 2 00:15:31.452 } 00:15:31.452 ], 00:15:31.452 "driver_specific": {} 00:15:31.452 } 00:15:31.452 ] 00:15:31.711 13:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:31.711 13:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:31.711 13:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:31.711 13:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:31.711 13:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:31.711 13:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:31.711 13:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:31.711 13:15:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.711 13:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.711 13:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.711 13:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.711 13:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.711 13:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.711 13:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.711 "name": "Existed_Raid", 00:15:31.711 "uuid": "1e5029a8-a994-4338-8df8-0aec10148a7c", 00:15:31.711 "strip_size_kb": 64, 00:15:31.711 "state": "configuring", 00:15:31.711 "raid_level": "concat", 00:15:31.711 "superblock": true, 00:15:31.711 "num_base_bdevs": 3, 00:15:31.711 "num_base_bdevs_discovered": 1, 00:15:31.711 "num_base_bdevs_operational": 3, 00:15:31.711 "base_bdevs_list": [ 00:15:31.711 { 00:15:31.711 "name": "BaseBdev1", 00:15:31.711 "uuid": "397cbdc5-63d4-417b-bb90-d3099333630c", 00:15:31.711 "is_configured": true, 00:15:31.711 "data_offset": 2048, 00:15:31.711 "data_size": 63488 00:15:31.711 }, 00:15:31.711 { 00:15:31.711 "name": "BaseBdev2", 00:15:31.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.711 "is_configured": false, 00:15:31.711 "data_offset": 0, 00:15:31.711 "data_size": 0 00:15:31.711 }, 00:15:31.711 { 00:15:31.711 "name": "BaseBdev3", 00:15:31.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.711 "is_configured": false, 00:15:31.711 "data_offset": 0, 00:15:31.711 "data_size": 0 00:15:31.711 } 00:15:31.711 ] 00:15:31.711 }' 00:15:31.711 13:15:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.711 13:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.279 13:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:32.538 [2024-07-25 13:15:42.942560] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.538 [2024-07-25 13:15:42.942592] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1957810 name Existed_Raid, state configuring 00:15:32.538 13:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:32.796 [2024-07-25 13:15:43.183237] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.796 [2024-07-25 13:15:43.184651] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.796 [2024-07-25 13:15:43.184680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.796 [2024-07-25 13:15:43.184694] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:32.796 [2024-07-25 13:15:43.184705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:32.796 13:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:32.796 13:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:32.796 13:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:32.796 13:15:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:32.796 13:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:32.796 13:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:32.796 13:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:32.796 13:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:32.796 13:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:32.796 13:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:32.796 13:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:32.796 13:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:32.796 13:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.796 13:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.054 13:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.054 "name": "Existed_Raid", 00:15:33.054 "uuid": "bb40a97a-eabb-4f56-ad4a-37818c122f98", 00:15:33.054 "strip_size_kb": 64, 00:15:33.054 "state": "configuring", 00:15:33.054 "raid_level": "concat", 00:15:33.054 "superblock": true, 00:15:33.054 "num_base_bdevs": 3, 00:15:33.054 "num_base_bdevs_discovered": 1, 00:15:33.054 "num_base_bdevs_operational": 3, 00:15:33.054 "base_bdevs_list": [ 00:15:33.054 { 00:15:33.054 "name": "BaseBdev1", 00:15:33.054 "uuid": "397cbdc5-63d4-417b-bb90-d3099333630c", 00:15:33.054 
"is_configured": true, 00:15:33.054 "data_offset": 2048, 00:15:33.054 "data_size": 63488 00:15:33.054 }, 00:15:33.054 { 00:15:33.054 "name": "BaseBdev2", 00:15:33.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.054 "is_configured": false, 00:15:33.054 "data_offset": 0, 00:15:33.054 "data_size": 0 00:15:33.054 }, 00:15:33.054 { 00:15:33.054 "name": "BaseBdev3", 00:15:33.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.054 "is_configured": false, 00:15:33.054 "data_offset": 0, 00:15:33.054 "data_size": 0 00:15:33.054 } 00:15:33.054 ] 00:15:33.054 }' 00:15:33.054 13:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.054 13:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.621 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:33.880 [2024-07-25 13:15:44.253828] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.880 BaseBdev2 00:15:33.880 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:33.880 13:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:33.880 13:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:33.880 13:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:33.880 13:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:33.880 13:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:33.880 13:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:34.139 13:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:34.398 [ 00:15:34.398 { 00:15:34.398 "name": "BaseBdev2", 00:15:34.398 "aliases": [ 00:15:34.398 "f03dcaa0-8f8a-4e26-a5f5-7062a76bb5df" 00:15:34.398 ], 00:15:34.398 "product_name": "Malloc disk", 00:15:34.398 "block_size": 512, 00:15:34.398 "num_blocks": 65536, 00:15:34.398 "uuid": "f03dcaa0-8f8a-4e26-a5f5-7062a76bb5df", 00:15:34.398 "assigned_rate_limits": { 00:15:34.398 "rw_ios_per_sec": 0, 00:15:34.398 "rw_mbytes_per_sec": 0, 00:15:34.398 "r_mbytes_per_sec": 0, 00:15:34.398 "w_mbytes_per_sec": 0 00:15:34.398 }, 00:15:34.398 "claimed": true, 00:15:34.398 "claim_type": "exclusive_write", 00:15:34.398 "zoned": false, 00:15:34.398 "supported_io_types": { 00:15:34.398 "read": true, 00:15:34.398 "write": true, 00:15:34.398 "unmap": true, 00:15:34.398 "flush": true, 00:15:34.398 "reset": true, 00:15:34.398 "nvme_admin": false, 00:15:34.398 "nvme_io": false, 00:15:34.398 "nvme_io_md": false, 00:15:34.398 "write_zeroes": true, 00:15:34.398 "zcopy": true, 00:15:34.398 "get_zone_info": false, 00:15:34.398 "zone_management": false, 00:15:34.398 "zone_append": false, 00:15:34.398 "compare": false, 00:15:34.398 "compare_and_write": false, 00:15:34.398 "abort": true, 00:15:34.398 "seek_hole": false, 00:15:34.398 "seek_data": false, 00:15:34.398 "copy": true, 00:15:34.398 "nvme_iov_md": false 00:15:34.398 }, 00:15:34.398 "memory_domains": [ 00:15:34.398 { 00:15:34.398 "dma_device_id": "system", 00:15:34.398 "dma_device_type": 1 00:15:34.398 }, 00:15:34.398 { 00:15:34.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.398 "dma_device_type": 2 00:15:34.398 } 00:15:34.398 ], 00:15:34.398 "driver_specific": {} 00:15:34.398 } 00:15:34.398 ] 
00:15:34.398 13:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:34.398 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:34.398 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:34.398 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:34.398 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:34.398 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:34.398 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:34.399 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:34.399 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:34.399 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:34.399 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:34.399 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:34.399 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:34.399 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.399 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.657 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:34.657 "name": "Existed_Raid", 
00:15:34.657 "uuid": "bb40a97a-eabb-4f56-ad4a-37818c122f98", 00:15:34.657 "strip_size_kb": 64, 00:15:34.657 "state": "configuring", 00:15:34.657 "raid_level": "concat", 00:15:34.657 "superblock": true, 00:15:34.657 "num_base_bdevs": 3, 00:15:34.657 "num_base_bdevs_discovered": 2, 00:15:34.657 "num_base_bdevs_operational": 3, 00:15:34.657 "base_bdevs_list": [ 00:15:34.657 { 00:15:34.657 "name": "BaseBdev1", 00:15:34.657 "uuid": "397cbdc5-63d4-417b-bb90-d3099333630c", 00:15:34.657 "is_configured": true, 00:15:34.657 "data_offset": 2048, 00:15:34.657 "data_size": 63488 00:15:34.657 }, 00:15:34.657 { 00:15:34.657 "name": "BaseBdev2", 00:15:34.657 "uuid": "f03dcaa0-8f8a-4e26-a5f5-7062a76bb5df", 00:15:34.657 "is_configured": true, 00:15:34.657 "data_offset": 2048, 00:15:34.657 "data_size": 63488 00:15:34.658 }, 00:15:34.658 { 00:15:34.658 "name": "BaseBdev3", 00:15:34.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.658 "is_configured": false, 00:15:34.658 "data_offset": 0, 00:15:34.658 "data_size": 0 00:15:34.658 } 00:15:34.658 ] 00:15:34.658 }' 00:15:34.658 13:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:34.658 13:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.225 13:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:35.225 [2024-07-25 13:15:45.665320] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.225 [2024-07-25 13:15:45.665466] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1958710 00:15:35.225 [2024-07-25 13:15:45.665479] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:35.225 [2024-07-25 13:15:45.665639] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x194f1e0 
00:15:35.225 [2024-07-25 13:15:45.665753] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1958710 00:15:35.225 [2024-07-25 13:15:45.665762] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1958710 00:15:35.225 [2024-07-25 13:15:45.665847] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.225 BaseBdev3 00:15:35.225 13:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:35.225 13:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:35.225 13:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:35.225 13:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:35.225 13:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:35.225 13:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:35.225 13:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:35.483 13:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:35.741 [ 00:15:35.741 { 00:15:35.741 "name": "BaseBdev3", 00:15:35.741 "aliases": [ 00:15:35.742 "b5a663d2-b4c2-45a6-b1f8-ae2e96d6f0f0" 00:15:35.742 ], 00:15:35.742 "product_name": "Malloc disk", 00:15:35.742 "block_size": 512, 00:15:35.742 "num_blocks": 65536, 00:15:35.742 "uuid": "b5a663d2-b4c2-45a6-b1f8-ae2e96d6f0f0", 00:15:35.742 "assigned_rate_limits": { 00:15:35.742 "rw_ios_per_sec": 0, 00:15:35.742 "rw_mbytes_per_sec": 0, 00:15:35.742 
"r_mbytes_per_sec": 0, 00:15:35.742 "w_mbytes_per_sec": 0 00:15:35.742 }, 00:15:35.742 "claimed": true, 00:15:35.742 "claim_type": "exclusive_write", 00:15:35.742 "zoned": false, 00:15:35.742 "supported_io_types": { 00:15:35.742 "read": true, 00:15:35.742 "write": true, 00:15:35.742 "unmap": true, 00:15:35.742 "flush": true, 00:15:35.742 "reset": true, 00:15:35.742 "nvme_admin": false, 00:15:35.742 "nvme_io": false, 00:15:35.742 "nvme_io_md": false, 00:15:35.742 "write_zeroes": true, 00:15:35.742 "zcopy": true, 00:15:35.742 "get_zone_info": false, 00:15:35.742 "zone_management": false, 00:15:35.742 "zone_append": false, 00:15:35.742 "compare": false, 00:15:35.742 "compare_and_write": false, 00:15:35.742 "abort": true, 00:15:35.742 "seek_hole": false, 00:15:35.742 "seek_data": false, 00:15:35.742 "copy": true, 00:15:35.742 "nvme_iov_md": false 00:15:35.742 }, 00:15:35.742 "memory_domains": [ 00:15:35.742 { 00:15:35.742 "dma_device_id": "system", 00:15:35.742 "dma_device_type": 1 00:15:35.742 }, 00:15:35.742 { 00:15:35.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.742 "dma_device_type": 2 00:15:35.742 } 00:15:35.742 ], 00:15:35.742 "driver_specific": {} 00:15:35.742 } 00:15:35.742 ] 00:15:35.742 13:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:35.742 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:35.742 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:35.742 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:35.742 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:35.742 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:35.742 13:15:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:35.742 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:35.742 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:35.742 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.742 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.742 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.742 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.742 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.742 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.000 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:36.000 "name": "Existed_Raid", 00:15:36.000 "uuid": "bb40a97a-eabb-4f56-ad4a-37818c122f98", 00:15:36.000 "strip_size_kb": 64, 00:15:36.000 "state": "online", 00:15:36.000 "raid_level": "concat", 00:15:36.000 "superblock": true, 00:15:36.000 "num_base_bdevs": 3, 00:15:36.000 "num_base_bdevs_discovered": 3, 00:15:36.000 "num_base_bdevs_operational": 3, 00:15:36.000 "base_bdevs_list": [ 00:15:36.000 { 00:15:36.000 "name": "BaseBdev1", 00:15:36.000 "uuid": "397cbdc5-63d4-417b-bb90-d3099333630c", 00:15:36.000 "is_configured": true, 00:15:36.000 "data_offset": 2048, 00:15:36.000 "data_size": 63488 00:15:36.000 }, 00:15:36.000 { 00:15:36.000 "name": "BaseBdev2", 00:15:36.000 "uuid": "f03dcaa0-8f8a-4e26-a5f5-7062a76bb5df", 00:15:36.000 "is_configured": true, 00:15:36.001 "data_offset": 2048, 00:15:36.001 
"data_size": 63488 00:15:36.001 }, 00:15:36.001 { 00:15:36.001 "name": "BaseBdev3", 00:15:36.001 "uuid": "b5a663d2-b4c2-45a6-b1f8-ae2e96d6f0f0", 00:15:36.001 "is_configured": true, 00:15:36.001 "data_offset": 2048, 00:15:36.001 "data_size": 63488 00:15:36.001 } 00:15:36.001 ] 00:15:36.001 }' 00:15:36.001 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.001 13:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.567 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:36.567 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:36.567 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:36.567 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:36.567 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:36.567 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:36.567 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:36.567 13:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:36.825 [2024-07-25 13:15:47.137465] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.825 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:36.825 "name": "Existed_Raid", 00:15:36.825 "aliases": [ 00:15:36.825 "bb40a97a-eabb-4f56-ad4a-37818c122f98" 00:15:36.825 ], 00:15:36.825 "product_name": "Raid Volume", 00:15:36.825 "block_size": 512, 00:15:36.825 "num_blocks": 190464, 00:15:36.825 "uuid": 
"bb40a97a-eabb-4f56-ad4a-37818c122f98", 00:15:36.825 "assigned_rate_limits": { 00:15:36.825 "rw_ios_per_sec": 0, 00:15:36.825 "rw_mbytes_per_sec": 0, 00:15:36.825 "r_mbytes_per_sec": 0, 00:15:36.825 "w_mbytes_per_sec": 0 00:15:36.825 }, 00:15:36.825 "claimed": false, 00:15:36.825 "zoned": false, 00:15:36.825 "supported_io_types": { 00:15:36.825 "read": true, 00:15:36.825 "write": true, 00:15:36.825 "unmap": true, 00:15:36.825 "flush": true, 00:15:36.825 "reset": true, 00:15:36.825 "nvme_admin": false, 00:15:36.825 "nvme_io": false, 00:15:36.825 "nvme_io_md": false, 00:15:36.825 "write_zeroes": true, 00:15:36.825 "zcopy": false, 00:15:36.825 "get_zone_info": false, 00:15:36.825 "zone_management": false, 00:15:36.825 "zone_append": false, 00:15:36.825 "compare": false, 00:15:36.825 "compare_and_write": false, 00:15:36.825 "abort": false, 00:15:36.825 "seek_hole": false, 00:15:36.825 "seek_data": false, 00:15:36.825 "copy": false, 00:15:36.825 "nvme_iov_md": false 00:15:36.825 }, 00:15:36.825 "memory_domains": [ 00:15:36.825 { 00:15:36.825 "dma_device_id": "system", 00:15:36.825 "dma_device_type": 1 00:15:36.825 }, 00:15:36.825 { 00:15:36.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.825 "dma_device_type": 2 00:15:36.825 }, 00:15:36.825 { 00:15:36.825 "dma_device_id": "system", 00:15:36.825 "dma_device_type": 1 00:15:36.825 }, 00:15:36.825 { 00:15:36.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.825 "dma_device_type": 2 00:15:36.825 }, 00:15:36.825 { 00:15:36.825 "dma_device_id": "system", 00:15:36.825 "dma_device_type": 1 00:15:36.825 }, 00:15:36.825 { 00:15:36.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.825 "dma_device_type": 2 00:15:36.825 } 00:15:36.825 ], 00:15:36.825 "driver_specific": { 00:15:36.825 "raid": { 00:15:36.825 "uuid": "bb40a97a-eabb-4f56-ad4a-37818c122f98", 00:15:36.825 "strip_size_kb": 64, 00:15:36.825 "state": "online", 00:15:36.825 "raid_level": "concat", 00:15:36.825 "superblock": true, 00:15:36.825 "num_base_bdevs": 
3, 00:15:36.825 "num_base_bdevs_discovered": 3, 00:15:36.825 "num_base_bdevs_operational": 3, 00:15:36.825 "base_bdevs_list": [ 00:15:36.825 { 00:15:36.825 "name": "BaseBdev1", 00:15:36.825 "uuid": "397cbdc5-63d4-417b-bb90-d3099333630c", 00:15:36.825 "is_configured": true, 00:15:36.825 "data_offset": 2048, 00:15:36.825 "data_size": 63488 00:15:36.825 }, 00:15:36.825 { 00:15:36.825 "name": "BaseBdev2", 00:15:36.825 "uuid": "f03dcaa0-8f8a-4e26-a5f5-7062a76bb5df", 00:15:36.825 "is_configured": true, 00:15:36.825 "data_offset": 2048, 00:15:36.825 "data_size": 63488 00:15:36.825 }, 00:15:36.825 { 00:15:36.825 "name": "BaseBdev3", 00:15:36.825 "uuid": "b5a663d2-b4c2-45a6-b1f8-ae2e96d6f0f0", 00:15:36.825 "is_configured": true, 00:15:36.825 "data_offset": 2048, 00:15:36.825 "data_size": 63488 00:15:36.825 } 00:15:36.825 ] 00:15:36.825 } 00:15:36.825 } 00:15:36.825 }' 00:15:36.825 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.825 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:36.825 BaseBdev2 00:15:36.825 BaseBdev3' 00:15:36.825 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:36.825 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:36.825 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:37.084 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:37.084 "name": "BaseBdev1", 00:15:37.084 "aliases": [ 00:15:37.084 "397cbdc5-63d4-417b-bb90-d3099333630c" 00:15:37.084 ], 00:15:37.084 "product_name": "Malloc disk", 00:15:37.084 "block_size": 512, 00:15:37.084 "num_blocks": 65536, 00:15:37.084 
"uuid": "397cbdc5-63d4-417b-bb90-d3099333630c", 00:15:37.084 "assigned_rate_limits": { 00:15:37.084 "rw_ios_per_sec": 0, 00:15:37.084 "rw_mbytes_per_sec": 0, 00:15:37.084 "r_mbytes_per_sec": 0, 00:15:37.084 "w_mbytes_per_sec": 0 00:15:37.084 }, 00:15:37.084 "claimed": true, 00:15:37.084 "claim_type": "exclusive_write", 00:15:37.084 "zoned": false, 00:15:37.084 "supported_io_types": { 00:15:37.084 "read": true, 00:15:37.084 "write": true, 00:15:37.084 "unmap": true, 00:15:37.084 "flush": true, 00:15:37.084 "reset": true, 00:15:37.084 "nvme_admin": false, 00:15:37.084 "nvme_io": false, 00:15:37.084 "nvme_io_md": false, 00:15:37.084 "write_zeroes": true, 00:15:37.084 "zcopy": true, 00:15:37.084 "get_zone_info": false, 00:15:37.084 "zone_management": false, 00:15:37.084 "zone_append": false, 00:15:37.084 "compare": false, 00:15:37.084 "compare_and_write": false, 00:15:37.084 "abort": true, 00:15:37.084 "seek_hole": false, 00:15:37.084 "seek_data": false, 00:15:37.084 "copy": true, 00:15:37.084 "nvme_iov_md": false 00:15:37.084 }, 00:15:37.084 "memory_domains": [ 00:15:37.084 { 00:15:37.084 "dma_device_id": "system", 00:15:37.084 "dma_device_type": 1 00:15:37.084 }, 00:15:37.084 { 00:15:37.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.084 "dma_device_type": 2 00:15:37.084 } 00:15:37.084 ], 00:15:37.084 "driver_specific": {} 00:15:37.084 }' 00:15:37.084 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.084 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.084 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:37.084 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.343 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.343 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:15:37.343 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.343 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.343 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:37.343 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:37.343 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:37.343 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:37.343 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:37.343 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:37.343 13:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:37.602 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:37.602 "name": "BaseBdev2", 00:15:37.602 "aliases": [ 00:15:37.602 "f03dcaa0-8f8a-4e26-a5f5-7062a76bb5df" 00:15:37.602 ], 00:15:37.602 "product_name": "Malloc disk", 00:15:37.602 "block_size": 512, 00:15:37.602 "num_blocks": 65536, 00:15:37.602 "uuid": "f03dcaa0-8f8a-4e26-a5f5-7062a76bb5df", 00:15:37.602 "assigned_rate_limits": { 00:15:37.602 "rw_ios_per_sec": 0, 00:15:37.602 "rw_mbytes_per_sec": 0, 00:15:37.602 "r_mbytes_per_sec": 0, 00:15:37.602 "w_mbytes_per_sec": 0 00:15:37.602 }, 00:15:37.602 "claimed": true, 00:15:37.602 "claim_type": "exclusive_write", 00:15:37.602 "zoned": false, 00:15:37.602 "supported_io_types": { 00:15:37.602 "read": true, 00:15:37.602 "write": true, 00:15:37.602 "unmap": true, 00:15:37.602 "flush": true, 00:15:37.602 "reset": true, 00:15:37.602 "nvme_admin": false, 00:15:37.602 
"nvme_io": false, 00:15:37.602 "nvme_io_md": false, 00:15:37.602 "write_zeroes": true, 00:15:37.602 "zcopy": true, 00:15:37.602 "get_zone_info": false, 00:15:37.602 "zone_management": false, 00:15:37.602 "zone_append": false, 00:15:37.602 "compare": false, 00:15:37.602 "compare_and_write": false, 00:15:37.602 "abort": true, 00:15:37.602 "seek_hole": false, 00:15:37.602 "seek_data": false, 00:15:37.602 "copy": true, 00:15:37.602 "nvme_iov_md": false 00:15:37.602 }, 00:15:37.602 "memory_domains": [ 00:15:37.602 { 00:15:37.602 "dma_device_id": "system", 00:15:37.602 "dma_device_type": 1 00:15:37.602 }, 00:15:37.602 { 00:15:37.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.602 "dma_device_type": 2 00:15:37.602 } 00:15:37.602 ], 00:15:37.602 "driver_specific": {} 00:15:37.602 }' 00:15:37.602 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.602 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.861 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:37.861 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.861 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.861 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:37.861 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.861 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.861 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:37.861 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:37.861 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.120 13:15:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:38.120 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:38.120 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:38.120 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:38.120 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:38.120 "name": "BaseBdev3", 00:15:38.120 "aliases": [ 00:15:38.120 "b5a663d2-b4c2-45a6-b1f8-ae2e96d6f0f0" 00:15:38.120 ], 00:15:38.120 "product_name": "Malloc disk", 00:15:38.120 "block_size": 512, 00:15:38.120 "num_blocks": 65536, 00:15:38.120 "uuid": "b5a663d2-b4c2-45a6-b1f8-ae2e96d6f0f0", 00:15:38.120 "assigned_rate_limits": { 00:15:38.120 "rw_ios_per_sec": 0, 00:15:38.120 "rw_mbytes_per_sec": 0, 00:15:38.120 "r_mbytes_per_sec": 0, 00:15:38.120 "w_mbytes_per_sec": 0 00:15:38.120 }, 00:15:38.120 "claimed": true, 00:15:38.120 "claim_type": "exclusive_write", 00:15:38.120 "zoned": false, 00:15:38.120 "supported_io_types": { 00:15:38.120 "read": true, 00:15:38.120 "write": true, 00:15:38.120 "unmap": true, 00:15:38.120 "flush": true, 00:15:38.120 "reset": true, 00:15:38.120 "nvme_admin": false, 00:15:38.120 "nvme_io": false, 00:15:38.120 "nvme_io_md": false, 00:15:38.120 "write_zeroes": true, 00:15:38.120 "zcopy": true, 00:15:38.120 "get_zone_info": false, 00:15:38.120 "zone_management": false, 00:15:38.120 "zone_append": false, 00:15:38.120 "compare": false, 00:15:38.120 "compare_and_write": false, 00:15:38.120 "abort": true, 00:15:38.120 "seek_hole": false, 00:15:38.120 "seek_data": false, 00:15:38.120 "copy": true, 00:15:38.120 "nvme_iov_md": false 00:15:38.120 }, 00:15:38.120 "memory_domains": [ 00:15:38.120 { 00:15:38.120 "dma_device_id": 
"system", 00:15:38.120 "dma_device_type": 1 00:15:38.120 }, 00:15:38.120 { 00:15:38.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.120 "dma_device_type": 2 00:15:38.120 } 00:15:38.120 ], 00:15:38.120 "driver_specific": {} 00:15:38.120 }' 00:15:38.120 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.379 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.379 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:38.379 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.379 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.379 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:38.379 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.379 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.379 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:38.379 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.637 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.637 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:38.637 13:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:38.896 [2024-07-25 13:15:49.130535] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.897 [2024-07-25 13:15:49.130557] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.897 [2024-07-25 
13:15:49.130594] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:38.897 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.155 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:39.155 "name": "Existed_Raid", 00:15:39.155 "uuid": "bb40a97a-eabb-4f56-ad4a-37818c122f98", 00:15:39.155 "strip_size_kb": 64, 00:15:39.155 "state": "offline", 00:15:39.155 "raid_level": "concat", 00:15:39.155 "superblock": true, 00:15:39.155 "num_base_bdevs": 3, 00:15:39.155 "num_base_bdevs_discovered": 2, 00:15:39.155 "num_base_bdevs_operational": 2, 00:15:39.155 "base_bdevs_list": [ 00:15:39.155 { 00:15:39.155 "name": null, 00:15:39.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.155 "is_configured": false, 00:15:39.155 "data_offset": 2048, 00:15:39.155 "data_size": 63488 00:15:39.155 }, 00:15:39.155 { 00:15:39.155 "name": "BaseBdev2", 00:15:39.155 "uuid": "f03dcaa0-8f8a-4e26-a5f5-7062a76bb5df", 00:15:39.155 "is_configured": true, 00:15:39.155 "data_offset": 2048, 00:15:39.155 "data_size": 63488 00:15:39.155 }, 00:15:39.155 { 00:15:39.156 "name": "BaseBdev3", 00:15:39.156 "uuid": "b5a663d2-b4c2-45a6-b1f8-ae2e96d6f0f0", 00:15:39.156 "is_configured": true, 00:15:39.156 "data_offset": 2048, 00:15:39.156 "data_size": 63488 00:15:39.156 } 00:15:39.156 ] 00:15:39.156 }' 00:15:39.156 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.156 13:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.723 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:39.723 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:39.723 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:15:39.723 13:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:39.723 13:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:39.723 13:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:39.723 13:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:39.982 [2024-07-25 13:15:50.382319] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:39.982 13:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:39.982 13:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:39.982 13:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.982 13:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:40.240 13:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:40.240 13:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:40.240 13:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:40.497 [2024-07-25 13:15:50.839805] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:40.498 [2024-07-25 13:15:50.839838] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1958710 name Existed_Raid, state offline 00:15:40.498 13:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( 
i++ )) 00:15:40.498 13:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:40.498 13:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.498 13:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:40.756 13:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:40.756 13:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:40.756 13:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:15:40.756 13:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:40.756 13:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:40.756 13:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:41.015 BaseBdev2 00:15:41.015 13:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:41.015 13:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:41.015 13:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:41.015 13:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:41.015 13:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:41.015 13:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:41.015 13:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:41.273 13:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:41.533 [ 00:15:41.533 { 00:15:41.533 "name": "BaseBdev2", 00:15:41.533 "aliases": [ 00:15:41.533 "7eda5b02-9927-48f0-b935-4e0754f07e90" 00:15:41.533 ], 00:15:41.533 "product_name": "Malloc disk", 00:15:41.533 "block_size": 512, 00:15:41.533 "num_blocks": 65536, 00:15:41.533 "uuid": "7eda5b02-9927-48f0-b935-4e0754f07e90", 00:15:41.533 "assigned_rate_limits": { 00:15:41.533 "rw_ios_per_sec": 0, 00:15:41.533 "rw_mbytes_per_sec": 0, 00:15:41.533 "r_mbytes_per_sec": 0, 00:15:41.533 "w_mbytes_per_sec": 0 00:15:41.533 }, 00:15:41.533 "claimed": false, 00:15:41.533 "zoned": false, 00:15:41.533 "supported_io_types": { 00:15:41.533 "read": true, 00:15:41.533 "write": true, 00:15:41.533 "unmap": true, 00:15:41.533 "flush": true, 00:15:41.533 "reset": true, 00:15:41.533 "nvme_admin": false, 00:15:41.533 "nvme_io": false, 00:15:41.533 "nvme_io_md": false, 00:15:41.533 "write_zeroes": true, 00:15:41.533 "zcopy": true, 00:15:41.533 "get_zone_info": false, 00:15:41.533 "zone_management": false, 00:15:41.533 "zone_append": false, 00:15:41.533 "compare": false, 00:15:41.533 "compare_and_write": false, 00:15:41.533 "abort": true, 00:15:41.533 "seek_hole": false, 00:15:41.533 "seek_data": false, 00:15:41.533 "copy": true, 00:15:41.533 "nvme_iov_md": false 00:15:41.533 }, 00:15:41.533 "memory_domains": [ 00:15:41.533 { 00:15:41.533 "dma_device_id": "system", 00:15:41.533 "dma_device_type": 1 00:15:41.533 }, 00:15:41.533 { 00:15:41.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.533 "dma_device_type": 2 00:15:41.533 } 00:15:41.533 ], 00:15:41.533 "driver_specific": {} 00:15:41.533 } 00:15:41.533 ] 00:15:41.533 13:15:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:41.533 13:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:41.533 13:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:41.533 13:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:41.533 BaseBdev3 00:15:41.533 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:41.533 13:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:41.533 13:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:41.533 13:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:41.533 13:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:41.533 13:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:41.533 13:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:41.792 13:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:42.051 [ 00:15:42.051 { 00:15:42.051 "name": "BaseBdev3", 00:15:42.051 "aliases": [ 00:15:42.051 "89acacd9-2527-4f08-8e8a-0b122b93e18b" 00:15:42.051 ], 00:15:42.051 "product_name": "Malloc disk", 00:15:42.051 "block_size": 512, 00:15:42.051 "num_blocks": 65536, 00:15:42.051 "uuid": "89acacd9-2527-4f08-8e8a-0b122b93e18b", 
00:15:42.051 "assigned_rate_limits": { 00:15:42.051 "rw_ios_per_sec": 0, 00:15:42.051 "rw_mbytes_per_sec": 0, 00:15:42.051 "r_mbytes_per_sec": 0, 00:15:42.051 "w_mbytes_per_sec": 0 00:15:42.051 }, 00:15:42.051 "claimed": false, 00:15:42.051 "zoned": false, 00:15:42.051 "supported_io_types": { 00:15:42.051 "read": true, 00:15:42.051 "write": true, 00:15:42.051 "unmap": true, 00:15:42.051 "flush": true, 00:15:42.051 "reset": true, 00:15:42.051 "nvme_admin": false, 00:15:42.051 "nvme_io": false, 00:15:42.051 "nvme_io_md": false, 00:15:42.051 "write_zeroes": true, 00:15:42.051 "zcopy": true, 00:15:42.051 "get_zone_info": false, 00:15:42.051 "zone_management": false, 00:15:42.051 "zone_append": false, 00:15:42.051 "compare": false, 00:15:42.051 "compare_and_write": false, 00:15:42.051 "abort": true, 00:15:42.051 "seek_hole": false, 00:15:42.051 "seek_data": false, 00:15:42.051 "copy": true, 00:15:42.051 "nvme_iov_md": false 00:15:42.051 }, 00:15:42.051 "memory_domains": [ 00:15:42.051 { 00:15:42.051 "dma_device_id": "system", 00:15:42.051 "dma_device_type": 1 00:15:42.051 }, 00:15:42.051 { 00:15:42.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.051 "dma_device_type": 2 00:15:42.051 } 00:15:42.051 ], 00:15:42.051 "driver_specific": {} 00:15:42.051 } 00:15:42.051 ] 00:15:42.051 13:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:42.051 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:42.051 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:42.051 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:42.310 [2024-07-25 13:15:52.670235] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev1 00:15:42.310 [2024-07-25 13:15:52.670270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:42.310 [2024-07-25 13:15:52.670287] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:42.310 [2024-07-25 13:15:52.671556] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:42.310 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:42.310 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:42.310 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:42.310 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:42.310 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:42.310 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:42.310 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:42.310 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:42.310 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:42.310 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:42.310 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.310 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.570 13:15:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:42.570 "name": "Existed_Raid", 00:15:42.570 "uuid": "4825dad5-c60d-4d7c-81c2-a65ef09be47f", 00:15:42.570 "strip_size_kb": 64, 00:15:42.570 "state": "configuring", 00:15:42.570 "raid_level": "concat", 00:15:42.570 "superblock": true, 00:15:42.570 "num_base_bdevs": 3, 00:15:42.570 "num_base_bdevs_discovered": 2, 00:15:42.570 "num_base_bdevs_operational": 3, 00:15:42.570 "base_bdevs_list": [ 00:15:42.570 { 00:15:42.570 "name": "BaseBdev1", 00:15:42.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.570 "is_configured": false, 00:15:42.570 "data_offset": 0, 00:15:42.570 "data_size": 0 00:15:42.570 }, 00:15:42.570 { 00:15:42.570 "name": "BaseBdev2", 00:15:42.570 "uuid": "7eda5b02-9927-48f0-b935-4e0754f07e90", 00:15:42.570 "is_configured": true, 00:15:42.570 "data_offset": 2048, 00:15:42.570 "data_size": 63488 00:15:42.570 }, 00:15:42.570 { 00:15:42.570 "name": "BaseBdev3", 00:15:42.570 "uuid": "89acacd9-2527-4f08-8e8a-0b122b93e18b", 00:15:42.570 "is_configured": true, 00:15:42.570 "data_offset": 2048, 00:15:42.570 "data_size": 63488 00:15:42.570 } 00:15:42.570 ] 00:15:42.570 }' 00:15:42.570 13:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:42.570 13:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.138 13:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:43.397 [2024-07-25 13:15:53.692893] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:43.397 13:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:43.397 13:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:43.397 13:15:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:43.397 13:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:43.397 13:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:43.397 13:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:43.397 13:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.397 13:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.397 13:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.397 13:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.397 13:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.397 13:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.656 13:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.656 "name": "Existed_Raid", 00:15:43.656 "uuid": "4825dad5-c60d-4d7c-81c2-a65ef09be47f", 00:15:43.656 "strip_size_kb": 64, 00:15:43.656 "state": "configuring", 00:15:43.656 "raid_level": "concat", 00:15:43.656 "superblock": true, 00:15:43.656 "num_base_bdevs": 3, 00:15:43.656 "num_base_bdevs_discovered": 1, 00:15:43.656 "num_base_bdevs_operational": 3, 00:15:43.656 "base_bdevs_list": [ 00:15:43.656 { 00:15:43.656 "name": "BaseBdev1", 00:15:43.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.656 "is_configured": false, 00:15:43.656 "data_offset": 0, 00:15:43.656 "data_size": 0 00:15:43.656 }, 00:15:43.656 { 00:15:43.656 "name": 
null, 00:15:43.656 "uuid": "7eda5b02-9927-48f0-b935-4e0754f07e90", 00:15:43.656 "is_configured": false, 00:15:43.656 "data_offset": 2048, 00:15:43.656 "data_size": 63488 00:15:43.656 }, 00:15:43.656 { 00:15:43.656 "name": "BaseBdev3", 00:15:43.656 "uuid": "89acacd9-2527-4f08-8e8a-0b122b93e18b", 00:15:43.656 "is_configured": true, 00:15:43.656 "data_offset": 2048, 00:15:43.656 "data_size": 63488 00:15:43.656 } 00:15:43.656 ] 00:15:43.656 }' 00:15:43.656 13:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.656 13:15:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.223 13:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.223 13:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:44.482 13:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:44.482 13:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:44.482 [2024-07-25 13:15:54.944343] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.482 BaseBdev1 00:15:44.482 13:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:44.482 13:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:44.482 13:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:44.482 13:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:44.482 13:15:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:44.482 13:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:44.482 13:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:44.740 13:15:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:45.023 [ 00:15:45.023 { 00:15:45.023 "name": "BaseBdev1", 00:15:45.023 "aliases": [ 00:15:45.023 "9c31daa6-31b2-47af-9b23-2d4292b6dd8c" 00:15:45.023 ], 00:15:45.023 "product_name": "Malloc disk", 00:15:45.023 "block_size": 512, 00:15:45.023 "num_blocks": 65536, 00:15:45.023 "uuid": "9c31daa6-31b2-47af-9b23-2d4292b6dd8c", 00:15:45.023 "assigned_rate_limits": { 00:15:45.023 "rw_ios_per_sec": 0, 00:15:45.023 "rw_mbytes_per_sec": 0, 00:15:45.023 "r_mbytes_per_sec": 0, 00:15:45.023 "w_mbytes_per_sec": 0 00:15:45.023 }, 00:15:45.023 "claimed": true, 00:15:45.023 "claim_type": "exclusive_write", 00:15:45.023 "zoned": false, 00:15:45.023 "supported_io_types": { 00:15:45.023 "read": true, 00:15:45.023 "write": true, 00:15:45.023 "unmap": true, 00:15:45.023 "flush": true, 00:15:45.023 "reset": true, 00:15:45.023 "nvme_admin": false, 00:15:45.023 "nvme_io": false, 00:15:45.023 "nvme_io_md": false, 00:15:45.023 "write_zeroes": true, 00:15:45.023 "zcopy": true, 00:15:45.023 "get_zone_info": false, 00:15:45.023 "zone_management": false, 00:15:45.023 "zone_append": false, 00:15:45.023 "compare": false, 00:15:45.023 "compare_and_write": false, 00:15:45.023 "abort": true, 00:15:45.023 "seek_hole": false, 00:15:45.023 "seek_data": false, 00:15:45.023 "copy": true, 00:15:45.023 "nvme_iov_md": false 00:15:45.023 }, 00:15:45.023 "memory_domains": [ 00:15:45.023 { 00:15:45.023 "dma_device_id": 
"system", 00:15:45.023 "dma_device_type": 1 00:15:45.023 }, 00:15:45.023 { 00:15:45.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.023 "dma_device_type": 2 00:15:45.023 } 00:15:45.023 ], 00:15:45.023 "driver_specific": {} 00:15:45.023 } 00:15:45.023 ] 00:15:45.024 13:15:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:45.024 13:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:45.024 13:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:45.024 13:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:45.024 13:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:45.024 13:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:45.024 13:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:45.024 13:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.024 13:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.024 13:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.024 13:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.024 13:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.024 13:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.299 13:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:15:45.299 "name": "Existed_Raid", 00:15:45.299 "uuid": "4825dad5-c60d-4d7c-81c2-a65ef09be47f", 00:15:45.299 "strip_size_kb": 64, 00:15:45.299 "state": "configuring", 00:15:45.299 "raid_level": "concat", 00:15:45.299 "superblock": true, 00:15:45.299 "num_base_bdevs": 3, 00:15:45.299 "num_base_bdevs_discovered": 2, 00:15:45.299 "num_base_bdevs_operational": 3, 00:15:45.299 "base_bdevs_list": [ 00:15:45.299 { 00:15:45.299 "name": "BaseBdev1", 00:15:45.299 "uuid": "9c31daa6-31b2-47af-9b23-2d4292b6dd8c", 00:15:45.299 "is_configured": true, 00:15:45.299 "data_offset": 2048, 00:15:45.299 "data_size": 63488 00:15:45.299 }, 00:15:45.299 { 00:15:45.299 "name": null, 00:15:45.299 "uuid": "7eda5b02-9927-48f0-b935-4e0754f07e90", 00:15:45.299 "is_configured": false, 00:15:45.299 "data_offset": 2048, 00:15:45.299 "data_size": 63488 00:15:45.299 }, 00:15:45.299 { 00:15:45.299 "name": "BaseBdev3", 00:15:45.299 "uuid": "89acacd9-2527-4f08-8e8a-0b122b93e18b", 00:15:45.299 "is_configured": true, 00:15:45.299 "data_offset": 2048, 00:15:45.299 "data_size": 63488 00:15:45.299 } 00:15:45.299 ] 00:15:45.299 }' 00:15:45.299 13:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.299 13:15:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.867 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.867 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:46.125 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:46.125 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 
00:15:46.385 [2024-07-25 13:15:56.668914] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:46.385 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:46.385 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:46.385 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:46.385 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:46.385 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:46.385 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:46.385 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.385 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.385 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.385 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.385 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.385 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.644 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:46.644 "name": "Existed_Raid", 00:15:46.644 "uuid": "4825dad5-c60d-4d7c-81c2-a65ef09be47f", 00:15:46.644 "strip_size_kb": 64, 00:15:46.644 "state": "configuring", 00:15:46.644 "raid_level": "concat", 00:15:46.644 "superblock": true, 00:15:46.644 
"num_base_bdevs": 3, 00:15:46.644 "num_base_bdevs_discovered": 1, 00:15:46.644 "num_base_bdevs_operational": 3, 00:15:46.644 "base_bdevs_list": [ 00:15:46.644 { 00:15:46.644 "name": "BaseBdev1", 00:15:46.644 "uuid": "9c31daa6-31b2-47af-9b23-2d4292b6dd8c", 00:15:46.644 "is_configured": true, 00:15:46.644 "data_offset": 2048, 00:15:46.644 "data_size": 63488 00:15:46.644 }, 00:15:46.644 { 00:15:46.644 "name": null, 00:15:46.644 "uuid": "7eda5b02-9927-48f0-b935-4e0754f07e90", 00:15:46.644 "is_configured": false, 00:15:46.644 "data_offset": 2048, 00:15:46.644 "data_size": 63488 00:15:46.644 }, 00:15:46.644 { 00:15:46.644 "name": null, 00:15:46.644 "uuid": "89acacd9-2527-4f08-8e8a-0b122b93e18b", 00:15:46.644 "is_configured": false, 00:15:46.644 "data_offset": 2048, 00:15:46.644 "data_size": 63488 00:15:46.644 } 00:15:46.644 ] 00:15:46.644 }' 00:15:46.644 13:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:46.644 13:15:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.213 13:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.213 13:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:47.472 13:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:47.472 13:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:47.472 [2024-07-25 13:15:57.912200] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:47.472 13:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring 
concat 64 3 00:15:47.472 13:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:47.472 13:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:47.472 13:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:47.472 13:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:47.472 13:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:47.472 13:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:47.472 13:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:47.472 13:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:47.472 13:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:47.472 13:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.472 13:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.731 13:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:47.731 "name": "Existed_Raid", 00:15:47.731 "uuid": "4825dad5-c60d-4d7c-81c2-a65ef09be47f", 00:15:47.731 "strip_size_kb": 64, 00:15:47.731 "state": "configuring", 00:15:47.731 "raid_level": "concat", 00:15:47.731 "superblock": true, 00:15:47.731 "num_base_bdevs": 3, 00:15:47.731 "num_base_bdevs_discovered": 2, 00:15:47.731 "num_base_bdevs_operational": 3, 00:15:47.731 "base_bdevs_list": [ 00:15:47.731 { 00:15:47.731 "name": "BaseBdev1", 00:15:47.731 "uuid": 
"9c31daa6-31b2-47af-9b23-2d4292b6dd8c", 00:15:47.731 "is_configured": true, 00:15:47.731 "data_offset": 2048, 00:15:47.731 "data_size": 63488 00:15:47.731 }, 00:15:47.731 { 00:15:47.731 "name": null, 00:15:47.731 "uuid": "7eda5b02-9927-48f0-b935-4e0754f07e90", 00:15:47.731 "is_configured": false, 00:15:47.731 "data_offset": 2048, 00:15:47.731 "data_size": 63488 00:15:47.731 }, 00:15:47.731 { 00:15:47.731 "name": "BaseBdev3", 00:15:47.731 "uuid": "89acacd9-2527-4f08-8e8a-0b122b93e18b", 00:15:47.731 "is_configured": true, 00:15:47.731 "data_offset": 2048, 00:15:47.731 "data_size": 63488 00:15:47.731 } 00:15:47.731 ] 00:15:47.731 }' 00:15:47.731 13:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:47.731 13:15:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.300 13:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.300 13:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:48.559 13:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:48.559 13:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:48.817 [2024-07-25 13:15:59.087306] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:48.817 13:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:48.817 13:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:48.817 13:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:15:48.817 13:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:48.817 13:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:48.817 13:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:48.817 13:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:48.817 13:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:48.817 13:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:48.817 13:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:48.817 13:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.817 13:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.817 13:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:48.817 "name": "Existed_Raid", 00:15:48.817 "uuid": "4825dad5-c60d-4d7c-81c2-a65ef09be47f", 00:15:48.817 "strip_size_kb": 64, 00:15:48.817 "state": "configuring", 00:15:48.817 "raid_level": "concat", 00:15:48.817 "superblock": true, 00:15:48.817 "num_base_bdevs": 3, 00:15:48.817 "num_base_bdevs_discovered": 1, 00:15:48.817 "num_base_bdevs_operational": 3, 00:15:48.817 "base_bdevs_list": [ 00:15:48.817 { 00:15:48.817 "name": null, 00:15:48.817 "uuid": "9c31daa6-31b2-47af-9b23-2d4292b6dd8c", 00:15:48.817 "is_configured": false, 00:15:48.817 "data_offset": 2048, 00:15:48.817 "data_size": 63488 00:15:48.817 }, 00:15:48.818 { 00:15:48.818 "name": null, 00:15:48.818 "uuid": "7eda5b02-9927-48f0-b935-4e0754f07e90", 
00:15:48.818 "is_configured": false, 00:15:48.818 "data_offset": 2048, 00:15:48.818 "data_size": 63488 00:15:48.818 }, 00:15:48.818 { 00:15:48.818 "name": "BaseBdev3", 00:15:48.818 "uuid": "89acacd9-2527-4f08-8e8a-0b122b93e18b", 00:15:48.818 "is_configured": true, 00:15:48.818 "data_offset": 2048, 00:15:48.818 "data_size": 63488 00:15:48.818 } 00:15:48.818 ] 00:15:48.818 }' 00:15:48.818 13:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:48.818 13:15:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.385 13:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.385 13:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:49.645 13:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:49.645 13:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:49.904 [2024-07-25 13:16:00.238645] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.904 13:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:49.904 13:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:49.904 13:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:49.904 13:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:49.904 13:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:15:49.904 13:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:49.904 13:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:49.904 13:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:49.904 13:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:49.904 13:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:49.904 13:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.904 13:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.164 13:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.164 "name": "Existed_Raid", 00:15:50.164 "uuid": "4825dad5-c60d-4d7c-81c2-a65ef09be47f", 00:15:50.164 "strip_size_kb": 64, 00:15:50.164 "state": "configuring", 00:15:50.164 "raid_level": "concat", 00:15:50.164 "superblock": true, 00:15:50.164 "num_base_bdevs": 3, 00:15:50.164 "num_base_bdevs_discovered": 2, 00:15:50.164 "num_base_bdevs_operational": 3, 00:15:50.164 "base_bdevs_list": [ 00:15:50.164 { 00:15:50.164 "name": null, 00:15:50.164 "uuid": "9c31daa6-31b2-47af-9b23-2d4292b6dd8c", 00:15:50.164 "is_configured": false, 00:15:50.164 "data_offset": 2048, 00:15:50.164 "data_size": 63488 00:15:50.164 }, 00:15:50.164 { 00:15:50.164 "name": "BaseBdev2", 00:15:50.164 "uuid": "7eda5b02-9927-48f0-b935-4e0754f07e90", 00:15:50.164 "is_configured": true, 00:15:50.164 "data_offset": 2048, 00:15:50.164 "data_size": 63488 00:15:50.164 }, 00:15:50.164 { 00:15:50.164 "name": "BaseBdev3", 00:15:50.164 "uuid": "89acacd9-2527-4f08-8e8a-0b122b93e18b", 
00:15:50.164 "is_configured": true, 00:15:50.164 "data_offset": 2048, 00:15:50.164 "data_size": 63488 00:15:50.164 } 00:15:50.164 ] 00:15:50.164 }' 00:15:50.164 13:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.164 13:16:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.732 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.732 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:50.992 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:50.992 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:50.992 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.992 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9c31daa6-31b2-47af-9b23-2d4292b6dd8c 00:15:51.251 [2024-07-25 13:16:01.581890] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:51.251 [2024-07-25 13:16:01.582025] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b0bbe0 00:15:51.251 [2024-07-25 13:16:01.582037] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:51.251 [2024-07-25 13:16:01.582213] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b0aad0 00:15:51.251 [2024-07-25 13:16:01.582321] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x1b0bbe0 00:15:51.251 [2024-07-25 13:16:01.582330] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1b0bbe0 00:15:51.251 [2024-07-25 13:16:01.582413] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.251 NewBaseBdev 00:15:51.251 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:51.251 13:16:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:51.251 13:16:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:51.251 13:16:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:51.251 13:16:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:51.251 13:16:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:51.251 13:16:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:51.510 13:16:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:51.510 [ 00:15:51.510 { 00:15:51.510 "name": "NewBaseBdev", 00:15:51.510 "aliases": [ 00:15:51.510 "9c31daa6-31b2-47af-9b23-2d4292b6dd8c" 00:15:51.510 ], 00:15:51.510 "product_name": "Malloc disk", 00:15:51.510 "block_size": 512, 00:15:51.510 "num_blocks": 65536, 00:15:51.510 "uuid": "9c31daa6-31b2-47af-9b23-2d4292b6dd8c", 00:15:51.510 "assigned_rate_limits": { 00:15:51.510 "rw_ios_per_sec": 0, 00:15:51.510 "rw_mbytes_per_sec": 0, 00:15:51.510 "r_mbytes_per_sec": 0, 00:15:51.510 "w_mbytes_per_sec": 0 00:15:51.510 }, 00:15:51.510 "claimed": true, 
00:15:51.510 "claim_type": "exclusive_write", 00:15:51.510 "zoned": false, 00:15:51.510 "supported_io_types": { 00:15:51.510 "read": true, 00:15:51.510 "write": true, 00:15:51.510 "unmap": true, 00:15:51.510 "flush": true, 00:15:51.510 "reset": true, 00:15:51.510 "nvme_admin": false, 00:15:51.510 "nvme_io": false, 00:15:51.510 "nvme_io_md": false, 00:15:51.510 "write_zeroes": true, 00:15:51.510 "zcopy": true, 00:15:51.510 "get_zone_info": false, 00:15:51.510 "zone_management": false, 00:15:51.510 "zone_append": false, 00:15:51.510 "compare": false, 00:15:51.510 "compare_and_write": false, 00:15:51.510 "abort": true, 00:15:51.510 "seek_hole": false, 00:15:51.510 "seek_data": false, 00:15:51.510 "copy": true, 00:15:51.510 "nvme_iov_md": false 00:15:51.510 }, 00:15:51.510 "memory_domains": [ 00:15:51.510 { 00:15:51.510 "dma_device_id": "system", 00:15:51.510 "dma_device_type": 1 00:15:51.510 }, 00:15:51.510 { 00:15:51.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.510 "dma_device_type": 2 00:15:51.510 } 00:15:51.510 ], 00:15:51.510 "driver_specific": {} 00:15:51.510 } 00:15:51.510 ] 00:15:51.510 13:16:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:51.510 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:51.510 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:51.510 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:51.510 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:51.510 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:51.510 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:51.510 13:16:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.510 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.510 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.510 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:51.510 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.510 13:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.769 13:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:51.769 "name": "Existed_Raid", 00:15:51.769 "uuid": "4825dad5-c60d-4d7c-81c2-a65ef09be47f", 00:15:51.769 "strip_size_kb": 64, 00:15:51.769 "state": "online", 00:15:51.769 "raid_level": "concat", 00:15:51.769 "superblock": true, 00:15:51.769 "num_base_bdevs": 3, 00:15:51.769 "num_base_bdevs_discovered": 3, 00:15:51.769 "num_base_bdevs_operational": 3, 00:15:51.769 "base_bdevs_list": [ 00:15:51.769 { 00:15:51.769 "name": "NewBaseBdev", 00:15:51.769 "uuid": "9c31daa6-31b2-47af-9b23-2d4292b6dd8c", 00:15:51.769 "is_configured": true, 00:15:51.769 "data_offset": 2048, 00:15:51.769 "data_size": 63488 00:15:51.769 }, 00:15:51.769 { 00:15:51.769 "name": "BaseBdev2", 00:15:51.769 "uuid": "7eda5b02-9927-48f0-b935-4e0754f07e90", 00:15:51.769 "is_configured": true, 00:15:51.769 "data_offset": 2048, 00:15:51.769 "data_size": 63488 00:15:51.769 }, 00:15:51.769 { 00:15:51.769 "name": "BaseBdev3", 00:15:51.769 "uuid": "89acacd9-2527-4f08-8e8a-0b122b93e18b", 00:15:51.769 "is_configured": true, 00:15:51.769 "data_offset": 2048, 00:15:51.769 "data_size": 63488 00:15:51.769 } 00:15:51.769 ] 00:15:51.769 }' 00:15:51.769 
13:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:51.769 13:16:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.336 13:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:52.336 13:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:52.336 13:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:52.336 13:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:52.336 13:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:52.336 13:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:52.336 13:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:52.336 13:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:52.336 [2024-07-25 13:16:02.765263] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.336 13:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:52.336 "name": "Existed_Raid", 00:15:52.336 "aliases": [ 00:15:52.336 "4825dad5-c60d-4d7c-81c2-a65ef09be47f" 00:15:52.336 ], 00:15:52.336 "product_name": "Raid Volume", 00:15:52.336 "block_size": 512, 00:15:52.336 "num_blocks": 190464, 00:15:52.336 "uuid": "4825dad5-c60d-4d7c-81c2-a65ef09be47f", 00:15:52.336 "assigned_rate_limits": { 00:15:52.336 "rw_ios_per_sec": 0, 00:15:52.336 "rw_mbytes_per_sec": 0, 00:15:52.336 "r_mbytes_per_sec": 0, 00:15:52.336 "w_mbytes_per_sec": 0 00:15:52.336 }, 00:15:52.336 "claimed": false, 00:15:52.336 "zoned": false, 00:15:52.336 
"supported_io_types": { 00:15:52.336 "read": true, 00:15:52.336 "write": true, 00:15:52.336 "unmap": true, 00:15:52.336 "flush": true, 00:15:52.336 "reset": true, 00:15:52.336 "nvme_admin": false, 00:15:52.336 "nvme_io": false, 00:15:52.336 "nvme_io_md": false, 00:15:52.336 "write_zeroes": true, 00:15:52.336 "zcopy": false, 00:15:52.336 "get_zone_info": false, 00:15:52.336 "zone_management": false, 00:15:52.336 "zone_append": false, 00:15:52.336 "compare": false, 00:15:52.336 "compare_and_write": false, 00:15:52.336 "abort": false, 00:15:52.336 "seek_hole": false, 00:15:52.336 "seek_data": false, 00:15:52.336 "copy": false, 00:15:52.336 "nvme_iov_md": false 00:15:52.336 }, 00:15:52.336 "memory_domains": [ 00:15:52.336 { 00:15:52.336 "dma_device_id": "system", 00:15:52.337 "dma_device_type": 1 00:15:52.337 }, 00:15:52.337 { 00:15:52.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.337 "dma_device_type": 2 00:15:52.337 }, 00:15:52.337 { 00:15:52.337 "dma_device_id": "system", 00:15:52.337 "dma_device_type": 1 00:15:52.337 }, 00:15:52.337 { 00:15:52.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.337 "dma_device_type": 2 00:15:52.337 }, 00:15:52.337 { 00:15:52.337 "dma_device_id": "system", 00:15:52.337 "dma_device_type": 1 00:15:52.337 }, 00:15:52.337 { 00:15:52.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.337 "dma_device_type": 2 00:15:52.337 } 00:15:52.337 ], 00:15:52.337 "driver_specific": { 00:15:52.337 "raid": { 00:15:52.337 "uuid": "4825dad5-c60d-4d7c-81c2-a65ef09be47f", 00:15:52.337 "strip_size_kb": 64, 00:15:52.337 "state": "online", 00:15:52.337 "raid_level": "concat", 00:15:52.337 "superblock": true, 00:15:52.337 "num_base_bdevs": 3, 00:15:52.337 "num_base_bdevs_discovered": 3, 00:15:52.337 "num_base_bdevs_operational": 3, 00:15:52.337 "base_bdevs_list": [ 00:15:52.337 { 00:15:52.337 "name": "NewBaseBdev", 00:15:52.337 "uuid": "9c31daa6-31b2-47af-9b23-2d4292b6dd8c", 00:15:52.337 "is_configured": true, 00:15:52.337 "data_offset": 2048, 
00:15:52.337 "data_size": 63488 00:15:52.337 }, 00:15:52.337 { 00:15:52.337 "name": "BaseBdev2", 00:15:52.337 "uuid": "7eda5b02-9927-48f0-b935-4e0754f07e90", 00:15:52.337 "is_configured": true, 00:15:52.337 "data_offset": 2048, 00:15:52.337 "data_size": 63488 00:15:52.337 }, 00:15:52.337 { 00:15:52.337 "name": "BaseBdev3", 00:15:52.337 "uuid": "89acacd9-2527-4f08-8e8a-0b122b93e18b", 00:15:52.337 "is_configured": true, 00:15:52.337 "data_offset": 2048, 00:15:52.337 "data_size": 63488 00:15:52.337 } 00:15:52.337 ] 00:15:52.337 } 00:15:52.337 } 00:15:52.337 }' 00:15:52.337 13:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:52.597 13:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:52.597 BaseBdev2 00:15:52.597 BaseBdev3' 00:15:52.597 13:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:52.597 13:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:52.597 13:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:52.597 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:52.597 "name": "NewBaseBdev", 00:15:52.597 "aliases": [ 00:15:52.597 "9c31daa6-31b2-47af-9b23-2d4292b6dd8c" 00:15:52.597 ], 00:15:52.597 "product_name": "Malloc disk", 00:15:52.597 "block_size": 512, 00:15:52.597 "num_blocks": 65536, 00:15:52.597 "uuid": "9c31daa6-31b2-47af-9b23-2d4292b6dd8c", 00:15:52.597 "assigned_rate_limits": { 00:15:52.597 "rw_ios_per_sec": 0, 00:15:52.597 "rw_mbytes_per_sec": 0, 00:15:52.597 "r_mbytes_per_sec": 0, 00:15:52.597 "w_mbytes_per_sec": 0 00:15:52.597 }, 00:15:52.597 "claimed": true, 00:15:52.597 "claim_type": 
"exclusive_write", 00:15:52.597 "zoned": false, 00:15:52.597 "supported_io_types": { 00:15:52.597 "read": true, 00:15:52.597 "write": true, 00:15:52.597 "unmap": true, 00:15:52.597 "flush": true, 00:15:52.597 "reset": true, 00:15:52.597 "nvme_admin": false, 00:15:52.597 "nvme_io": false, 00:15:52.597 "nvme_io_md": false, 00:15:52.597 "write_zeroes": true, 00:15:52.597 "zcopy": true, 00:15:52.597 "get_zone_info": false, 00:15:52.597 "zone_management": false, 00:15:52.597 "zone_append": false, 00:15:52.597 "compare": false, 00:15:52.597 "compare_and_write": false, 00:15:52.597 "abort": true, 00:15:52.597 "seek_hole": false, 00:15:52.597 "seek_data": false, 00:15:52.597 "copy": true, 00:15:52.597 "nvme_iov_md": false 00:15:52.597 }, 00:15:52.597 "memory_domains": [ 00:15:52.597 { 00:15:52.597 "dma_device_id": "system", 00:15:52.597 "dma_device_type": 1 00:15:52.597 }, 00:15:52.597 { 00:15:52.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.597 "dma_device_type": 2 00:15:52.597 } 00:15:52.597 ], 00:15:52.597 "driver_specific": {} 00:15:52.597 }' 00:15:52.597 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.597 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.857 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:52.857 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.857 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.857 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:52.857 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.857 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.857 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # 
[[ null == null ]] 00:15:52.857 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.857 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:53.116 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:53.116 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:53.116 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:53.116 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:53.375 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:53.375 "name": "BaseBdev2", 00:15:53.375 "aliases": [ 00:15:53.375 "7eda5b02-9927-48f0-b935-4e0754f07e90" 00:15:53.375 ], 00:15:53.375 "product_name": "Malloc disk", 00:15:53.375 "block_size": 512, 00:15:53.375 "num_blocks": 65536, 00:15:53.375 "uuid": "7eda5b02-9927-48f0-b935-4e0754f07e90", 00:15:53.375 "assigned_rate_limits": { 00:15:53.375 "rw_ios_per_sec": 0, 00:15:53.375 "rw_mbytes_per_sec": 0, 00:15:53.375 "r_mbytes_per_sec": 0, 00:15:53.375 "w_mbytes_per_sec": 0 00:15:53.375 }, 00:15:53.375 "claimed": true, 00:15:53.375 "claim_type": "exclusive_write", 00:15:53.375 "zoned": false, 00:15:53.375 "supported_io_types": { 00:15:53.375 "read": true, 00:15:53.375 "write": true, 00:15:53.375 "unmap": true, 00:15:53.375 "flush": true, 00:15:53.375 "reset": true, 00:15:53.375 "nvme_admin": false, 00:15:53.375 "nvme_io": false, 00:15:53.375 "nvme_io_md": false, 00:15:53.375 "write_zeroes": true, 00:15:53.375 "zcopy": true, 00:15:53.375 "get_zone_info": false, 00:15:53.375 "zone_management": false, 00:15:53.375 "zone_append": false, 00:15:53.375 "compare": false, 00:15:53.375 "compare_and_write": false, 
00:15:53.375 "abort": true, 00:15:53.375 "seek_hole": false, 00:15:53.375 "seek_data": false, 00:15:53.375 "copy": true, 00:15:53.375 "nvme_iov_md": false 00:15:53.375 }, 00:15:53.375 "memory_domains": [ 00:15:53.375 { 00:15:53.375 "dma_device_id": "system", 00:15:53.375 "dma_device_type": 1 00:15:53.375 }, 00:15:53.375 { 00:15:53.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.375 "dma_device_type": 2 00:15:53.375 } 00:15:53.375 ], 00:15:53.375 "driver_specific": {} 00:15:53.375 }' 00:15:53.375 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:53.375 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:53.375 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:53.375 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:53.375 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:53.375 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:53.375 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:53.375 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:53.634 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:53.634 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:53.634 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:53.634 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:53.634 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:53.634 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:53.634 13:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:53.893 13:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:53.893 "name": "BaseBdev3", 00:15:53.893 "aliases": [ 00:15:53.893 "89acacd9-2527-4f08-8e8a-0b122b93e18b" 00:15:53.893 ], 00:15:53.893 "product_name": "Malloc disk", 00:15:53.893 "block_size": 512, 00:15:53.893 "num_blocks": 65536, 00:15:53.893 "uuid": "89acacd9-2527-4f08-8e8a-0b122b93e18b", 00:15:53.893 "assigned_rate_limits": { 00:15:53.893 "rw_ios_per_sec": 0, 00:15:53.893 "rw_mbytes_per_sec": 0, 00:15:53.893 "r_mbytes_per_sec": 0, 00:15:53.893 "w_mbytes_per_sec": 0 00:15:53.893 }, 00:15:53.893 "claimed": true, 00:15:53.893 "claim_type": "exclusive_write", 00:15:53.893 "zoned": false, 00:15:53.893 "supported_io_types": { 00:15:53.893 "read": true, 00:15:53.893 "write": true, 00:15:53.893 "unmap": true, 00:15:53.893 "flush": true, 00:15:53.893 "reset": true, 00:15:53.893 "nvme_admin": false, 00:15:53.893 "nvme_io": false, 00:15:53.893 "nvme_io_md": false, 00:15:53.893 "write_zeroes": true, 00:15:53.893 "zcopy": true, 00:15:53.893 "get_zone_info": false, 00:15:53.893 "zone_management": false, 00:15:53.893 "zone_append": false, 00:15:53.893 "compare": false, 00:15:53.893 "compare_and_write": false, 00:15:53.893 "abort": true, 00:15:53.893 "seek_hole": false, 00:15:53.893 "seek_data": false, 00:15:53.893 "copy": true, 00:15:53.893 "nvme_iov_md": false 00:15:53.893 }, 00:15:53.893 "memory_domains": [ 00:15:53.893 { 00:15:53.893 "dma_device_id": "system", 00:15:53.893 "dma_device_type": 1 00:15:53.893 }, 00:15:53.893 { 00:15:53.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.893 "dma_device_type": 2 00:15:53.893 } 00:15:53.893 ], 00:15:53.893 "driver_specific": {} 00:15:53.893 }' 00:15:53.893 13:16:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:53.893 13:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:53.893 13:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:53.893 13:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:53.893 13:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:53.893 13:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:53.893 13:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:54.152 13:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:54.152 13:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:54.152 13:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:54.152 13:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:54.152 13:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:54.152 13:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:54.411 [2024-07-25 13:16:04.742219] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.411 [2024-07-25 13:16:04.742241] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.411 [2024-07-25 13:16:04.742285] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.411 [2024-07-25 13:16:04.742333] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.411 
[2024-07-25 13:16:04.742349] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b0bbe0 name Existed_Raid, state offline 00:15:54.411 13:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 872448 00:15:54.411 13:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 872448 ']' 00:15:54.411 13:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 872448 00:15:54.411 13:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:54.411 13:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:54.411 13:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 872448 00:15:54.411 13:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:54.411 13:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:54.411 13:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 872448' 00:15:54.411 killing process with pid 872448 00:15:54.411 13:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 872448 00:15:54.411 [2024-07-25 13:16:04.830120] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.411 13:16:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 872448 00:15:54.411 [2024-07-25 13:16:04.871306] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.980 13:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:54.980 00:15:54.980 real 0m26.168s 00:15:54.980 user 0m47.844s 00:15:54.980 sys 0m4.743s 00:15:54.980 13:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:15:54.980 13:16:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.980 ************************************ 00:15:54.980 END TEST raid_state_function_test_sb 00:15:54.980 ************************************ 00:15:54.980 13:16:05 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:15:54.980 13:16:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:54.981 13:16:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:54.981 13:16:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:54.981 ************************************ 00:15:54.981 START TEST raid_superblock_test 00:15:54.981 ************************************ 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=877533 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 877533 /var/tmp/spdk-raid.sock 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 877533 ']' 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:54.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:54.981 13:16:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.981 [2024-07-25 13:16:05.301928] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:15:54.981 [2024-07-25 13:16:05.301982] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid877533 ] 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3d:02.3 cannot be used 
00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:54.981 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:54.981 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.981 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:54.981 [2024-07-25 13:16:05.432416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.241 [2024-07-25 13:16:05.519660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.241 [2024-07-25 13:16:05.580405] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.241 [2024-07-25 13:16:05.580441] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.811 13:16:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:55.811 13:16:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:55.811 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:15:55.811 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:55.811 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:15:55.811 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:15:55.811 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:55.811 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:55.811 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:55.811 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:55.811 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:56.070 malloc1 00:15:56.070 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:56.329 [2024-07-25 13:16:06.644765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:56.329 [2024-07-25 13:16:06.644811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.329 [2024-07-25 13:16:06.644828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22482f0 00:15:56.329 [2024-07-25 13:16:06.644840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.329 [2024-07-25 13:16:06.646264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.329 [2024-07-25 13:16:06.646290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:56.329 pt1 00:15:56.329 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:56.329 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:56.329 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:15:56.329 13:16:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:15:56.329 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:56.329 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.329 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.329 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.329 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:56.588 malloc2 00:15:56.588 13:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:56.848 [2024-07-25 13:16:07.106290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:56.848 [2024-07-25 13:16:07.106328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.848 [2024-07-25 13:16:07.106343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23dff70 00:15:56.848 [2024-07-25 13:16:07.106355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.848 [2024-07-25 13:16:07.107689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.848 [2024-07-25 13:16:07.107715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:56.848 pt2 00:15:56.848 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:56.848 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:56.848 13:16:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:15:56.848 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:15:56.848 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:56.848 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.848 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.848 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.848 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:57.108 malloc3 00:15:57.108 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:57.108 [2024-07-25 13:16:07.567753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:57.108 [2024-07-25 13:16:07.567792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.108 [2024-07-25 13:16:07.567807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23e3830 00:15:57.108 [2024-07-25 13:16:07.567820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.108 [2024-07-25 13:16:07.569095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.108 [2024-07-25 13:16:07.569121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:57.108 pt3 00:15:57.108 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:57.108 
13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:57.108 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:57.367 [2024-07-25 13:16:07.788375] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:57.367 [2024-07-25 13:16:07.789479] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.367 [2024-07-25 13:16:07.789528] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:57.367 [2024-07-25 13:16:07.789648] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x23e2ab0 00:15:57.367 [2024-07-25 13:16:07.789663] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:57.367 [2024-07-25 13:16:07.789836] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23e7d60 00:15:57.367 [2024-07-25 13:16:07.789955] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23e2ab0 00:15:57.367 [2024-07-25 13:16:07.789963] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23e2ab0 00:15:57.367 [2024-07-25 13:16:07.790056] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.367 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:57.367 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:57.367 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:57.367 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:57.367 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- 
# local strip_size=64 00:15:57.367 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:57.367 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:57.367 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:57.367 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:57.367 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:57.367 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.367 13:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.627 13:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:57.627 "name": "raid_bdev1", 00:15:57.627 "uuid": "10f33017-56d8-4e58-bb35-3e338332d73f", 00:15:57.627 "strip_size_kb": 64, 00:15:57.627 "state": "online", 00:15:57.627 "raid_level": "concat", 00:15:57.627 "superblock": true, 00:15:57.627 "num_base_bdevs": 3, 00:15:57.627 "num_base_bdevs_discovered": 3, 00:15:57.627 "num_base_bdevs_operational": 3, 00:15:57.627 "base_bdevs_list": [ 00:15:57.627 { 00:15:57.627 "name": "pt1", 00:15:57.627 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.627 "is_configured": true, 00:15:57.627 "data_offset": 2048, 00:15:57.627 "data_size": 63488 00:15:57.627 }, 00:15:57.627 { 00:15:57.627 "name": "pt2", 00:15:57.627 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.627 "is_configured": true, 00:15:57.627 "data_offset": 2048, 00:15:57.627 "data_size": 63488 00:15:57.627 }, 00:15:57.627 { 00:15:57.627 "name": "pt3", 00:15:57.627 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.627 "is_configured": true, 00:15:57.627 "data_offset": 2048, 
00:15:57.627 "data_size": 63488 00:15:57.627 } 00:15:57.627 ] 00:15:57.627 }' 00:15:57.627 13:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:57.627 13:16:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.195 13:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:15:58.195 13:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:58.195 13:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:58.195 13:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:58.195 13:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:58.195 13:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:58.195 13:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:58.195 13:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:58.455 [2024-07-25 13:16:08.835355] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.455 13:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:58.455 "name": "raid_bdev1", 00:15:58.455 "aliases": [ 00:15:58.455 "10f33017-56d8-4e58-bb35-3e338332d73f" 00:15:58.455 ], 00:15:58.455 "product_name": "Raid Volume", 00:15:58.455 "block_size": 512, 00:15:58.455 "num_blocks": 190464, 00:15:58.455 "uuid": "10f33017-56d8-4e58-bb35-3e338332d73f", 00:15:58.455 "assigned_rate_limits": { 00:15:58.455 "rw_ios_per_sec": 0, 00:15:58.455 "rw_mbytes_per_sec": 0, 00:15:58.455 "r_mbytes_per_sec": 0, 00:15:58.455 "w_mbytes_per_sec": 0 00:15:58.455 }, 00:15:58.455 "claimed": false, 00:15:58.455 "zoned": false, 
00:15:58.455 "supported_io_types": { 00:15:58.455 "read": true, 00:15:58.455 "write": true, 00:15:58.455 "unmap": true, 00:15:58.455 "flush": true, 00:15:58.455 "reset": true, 00:15:58.455 "nvme_admin": false, 00:15:58.455 "nvme_io": false, 00:15:58.455 "nvme_io_md": false, 00:15:58.455 "write_zeroes": true, 00:15:58.455 "zcopy": false, 00:15:58.455 "get_zone_info": false, 00:15:58.455 "zone_management": false, 00:15:58.455 "zone_append": false, 00:15:58.455 "compare": false, 00:15:58.455 "compare_and_write": false, 00:15:58.455 "abort": false, 00:15:58.455 "seek_hole": false, 00:15:58.455 "seek_data": false, 00:15:58.455 "copy": false, 00:15:58.455 "nvme_iov_md": false 00:15:58.455 }, 00:15:58.455 "memory_domains": [ 00:15:58.455 { 00:15:58.455 "dma_device_id": "system", 00:15:58.455 "dma_device_type": 1 00:15:58.455 }, 00:15:58.455 { 00:15:58.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.455 "dma_device_type": 2 00:15:58.455 }, 00:15:58.455 { 00:15:58.455 "dma_device_id": "system", 00:15:58.455 "dma_device_type": 1 00:15:58.455 }, 00:15:58.455 { 00:15:58.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.455 "dma_device_type": 2 00:15:58.455 }, 00:15:58.455 { 00:15:58.455 "dma_device_id": "system", 00:15:58.455 "dma_device_type": 1 00:15:58.455 }, 00:15:58.455 { 00:15:58.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.455 "dma_device_type": 2 00:15:58.455 } 00:15:58.455 ], 00:15:58.455 "driver_specific": { 00:15:58.455 "raid": { 00:15:58.455 "uuid": "10f33017-56d8-4e58-bb35-3e338332d73f", 00:15:58.455 "strip_size_kb": 64, 00:15:58.455 "state": "online", 00:15:58.455 "raid_level": "concat", 00:15:58.455 "superblock": true, 00:15:58.455 "num_base_bdevs": 3, 00:15:58.455 "num_base_bdevs_discovered": 3, 00:15:58.455 "num_base_bdevs_operational": 3, 00:15:58.455 "base_bdevs_list": [ 00:15:58.455 { 00:15:58.455 "name": "pt1", 00:15:58.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.455 "is_configured": true, 00:15:58.455 "data_offset": 
2048, 00:15:58.455 "data_size": 63488 00:15:58.455 }, 00:15:58.455 { 00:15:58.455 "name": "pt2", 00:15:58.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.455 "is_configured": true, 00:15:58.455 "data_offset": 2048, 00:15:58.455 "data_size": 63488 00:15:58.455 }, 00:15:58.455 { 00:15:58.455 "name": "pt3", 00:15:58.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.455 "is_configured": true, 00:15:58.455 "data_offset": 2048, 00:15:58.455 "data_size": 63488 00:15:58.455 } 00:15:58.455 ] 00:15:58.455 } 00:15:58.455 } 00:15:58.455 }' 00:15:58.455 13:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:58.455 13:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:58.455 pt2 00:15:58.455 pt3' 00:15:58.455 13:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:58.455 13:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:58.455 13:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:58.715 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:58.715 "name": "pt1", 00:15:58.715 "aliases": [ 00:15:58.715 "00000000-0000-0000-0000-000000000001" 00:15:58.715 ], 00:15:58.715 "product_name": "passthru", 00:15:58.715 "block_size": 512, 00:15:58.715 "num_blocks": 65536, 00:15:58.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.715 "assigned_rate_limits": { 00:15:58.715 "rw_ios_per_sec": 0, 00:15:58.715 "rw_mbytes_per_sec": 0, 00:15:58.715 "r_mbytes_per_sec": 0, 00:15:58.715 "w_mbytes_per_sec": 0 00:15:58.715 }, 00:15:58.715 "claimed": true, 00:15:58.715 "claim_type": "exclusive_write", 00:15:58.715 "zoned": false, 00:15:58.715 "supported_io_types": { 
00:15:58.715 "read": true, 00:15:58.715 "write": true, 00:15:58.715 "unmap": true, 00:15:58.715 "flush": true, 00:15:58.715 "reset": true, 00:15:58.715 "nvme_admin": false, 00:15:58.715 "nvme_io": false, 00:15:58.715 "nvme_io_md": false, 00:15:58.715 "write_zeroes": true, 00:15:58.715 "zcopy": true, 00:15:58.715 "get_zone_info": false, 00:15:58.715 "zone_management": false, 00:15:58.715 "zone_append": false, 00:15:58.715 "compare": false, 00:15:58.715 "compare_and_write": false, 00:15:58.715 "abort": true, 00:15:58.715 "seek_hole": false, 00:15:58.715 "seek_data": false, 00:15:58.715 "copy": true, 00:15:58.715 "nvme_iov_md": false 00:15:58.715 }, 00:15:58.715 "memory_domains": [ 00:15:58.715 { 00:15:58.715 "dma_device_id": "system", 00:15:58.715 "dma_device_type": 1 00:15:58.715 }, 00:15:58.715 { 00:15:58.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.715 "dma_device_type": 2 00:15:58.715 } 00:15:58.715 ], 00:15:58.715 "driver_specific": { 00:15:58.715 "passthru": { 00:15:58.715 "name": "pt1", 00:15:58.715 "base_bdev_name": "malloc1" 00:15:58.715 } 00:15:58.715 } 00:15:58.715 }' 00:15:58.715 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.715 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.975 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:58.975 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.975 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.975 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:58.975 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.975 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.975 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:15:58.975 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.975 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.975 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:58.975 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:59.235 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:59.235 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:59.235 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:59.235 "name": "pt2", 00:15:59.235 "aliases": [ 00:15:59.235 "00000000-0000-0000-0000-000000000002" 00:15:59.235 ], 00:15:59.235 "product_name": "passthru", 00:15:59.235 "block_size": 512, 00:15:59.235 "num_blocks": 65536, 00:15:59.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.235 "assigned_rate_limits": { 00:15:59.235 "rw_ios_per_sec": 0, 00:15:59.235 "rw_mbytes_per_sec": 0, 00:15:59.235 "r_mbytes_per_sec": 0, 00:15:59.235 "w_mbytes_per_sec": 0 00:15:59.235 }, 00:15:59.235 "claimed": true, 00:15:59.235 "claim_type": "exclusive_write", 00:15:59.235 "zoned": false, 00:15:59.235 "supported_io_types": { 00:15:59.235 "read": true, 00:15:59.235 "write": true, 00:15:59.235 "unmap": true, 00:15:59.235 "flush": true, 00:15:59.236 "reset": true, 00:15:59.236 "nvme_admin": false, 00:15:59.236 "nvme_io": false, 00:15:59.236 "nvme_io_md": false, 00:15:59.236 "write_zeroes": true, 00:15:59.236 "zcopy": true, 00:15:59.236 "get_zone_info": false, 00:15:59.236 "zone_management": false, 00:15:59.236 "zone_append": false, 00:15:59.236 "compare": false, 00:15:59.236 "compare_and_write": false, 00:15:59.236 "abort": true, 00:15:59.236 "seek_hole": false, 00:15:59.236 "seek_data": 
false, 00:15:59.236 "copy": true, 00:15:59.236 "nvme_iov_md": false 00:15:59.236 }, 00:15:59.236 "memory_domains": [ 00:15:59.236 { 00:15:59.236 "dma_device_id": "system", 00:15:59.236 "dma_device_type": 1 00:15:59.236 }, 00:15:59.236 { 00:15:59.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.236 "dma_device_type": 2 00:15:59.236 } 00:15:59.236 ], 00:15:59.236 "driver_specific": { 00:15:59.236 "passthru": { 00:15:59.236 "name": "pt2", 00:15:59.236 "base_bdev_name": "malloc2" 00:15:59.236 } 00:15:59.236 } 00:15:59.236 }' 00:15:59.236 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:59.518 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:59.518 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:59.518 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:59.518 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:59.518 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:59.518 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:59.518 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:59.518 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:59.518 13:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:59.518 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:59.788 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:59.788 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:59.788 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:59.788 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:00.048 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:00.048 "name": "pt3", 00:16:00.048 "aliases": [ 00:16:00.048 "00000000-0000-0000-0000-000000000003" 00:16:00.048 ], 00:16:00.048 "product_name": "passthru", 00:16:00.048 "block_size": 512, 00:16:00.048 "num_blocks": 65536, 00:16:00.048 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.048 "assigned_rate_limits": { 00:16:00.048 "rw_ios_per_sec": 0, 00:16:00.048 "rw_mbytes_per_sec": 0, 00:16:00.048 "r_mbytes_per_sec": 0, 00:16:00.048 "w_mbytes_per_sec": 0 00:16:00.048 }, 00:16:00.048 "claimed": true, 00:16:00.048 "claim_type": "exclusive_write", 00:16:00.048 "zoned": false, 00:16:00.048 "supported_io_types": { 00:16:00.048 "read": true, 00:16:00.048 "write": true, 00:16:00.048 "unmap": true, 00:16:00.048 "flush": true, 00:16:00.048 "reset": true, 00:16:00.048 "nvme_admin": false, 00:16:00.048 "nvme_io": false, 00:16:00.048 "nvme_io_md": false, 00:16:00.048 "write_zeroes": true, 00:16:00.048 "zcopy": true, 00:16:00.048 "get_zone_info": false, 00:16:00.048 "zone_management": false, 00:16:00.048 "zone_append": false, 00:16:00.048 "compare": false, 00:16:00.048 "compare_and_write": false, 00:16:00.048 "abort": true, 00:16:00.048 "seek_hole": false, 00:16:00.048 "seek_data": false, 00:16:00.048 "copy": true, 00:16:00.048 "nvme_iov_md": false 00:16:00.048 }, 00:16:00.048 "memory_domains": [ 00:16:00.048 { 00:16:00.048 "dma_device_id": "system", 00:16:00.048 "dma_device_type": 1 00:16:00.048 }, 00:16:00.048 { 00:16:00.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.048 "dma_device_type": 2 00:16:00.048 } 00:16:00.048 ], 00:16:00.048 "driver_specific": { 00:16:00.048 "passthru": { 00:16:00.048 "name": "pt3", 00:16:00.048 "base_bdev_name": "malloc3" 00:16:00.048 } 00:16:00.048 } 00:16:00.048 }' 00:16:00.048 13:16:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:00.048 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:00.048 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:00.048 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:00.048 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:00.048 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:00.048 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:00.048 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:00.048 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:00.048 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:00.308 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:00.308 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:00.308 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:00.308 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:16:00.567 [2024-07-25 13:16:10.832866] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.567 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=10f33017-56d8-4e58-bb35-3e338332d73f 00:16:00.567 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 10f33017-56d8-4e58-bb35-3e338332d73f ']' 00:16:00.567 13:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:00.827 [2024-07-25 13:16:11.061195] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.827 [2024-07-25 13:16:11.061210] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.827 [2024-07-25 13:16:11.061256] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.827 [2024-07-25 13:16:11.061302] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.827 [2024-07-25 13:16:11.061313] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23e2ab0 name raid_bdev1, state offline 00:16:00.827 13:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.827 13:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:16:00.827 13:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:16:00.827 13:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:16:00.827 13:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:00.827 13:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:01.086 13:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:01.086 13:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:01.345 13:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in 
"${base_bdevs_pt[@]}" 00:16:01.345 13:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:01.604 13:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:01.604 13:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:01.864 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:16:01.864 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:01.864 13:16:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:01.864 13:16:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:01.864 13:16:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:01.864 13:16:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:01.864 13:16:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:01.864 13:16:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:01.864 13:16:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:01.864 13:16:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:01.864 13:16:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:01.864 13:16:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:16:01.864 13:16:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:02.124 [2024-07-25 13:16:12.440799] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:02.124 [2024-07-25 13:16:12.442095] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:02.124 [2024-07-25 13:16:12.442137] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:02.124 [2024-07-25 13:16:12.442190] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:02.124 [2024-07-25 13:16:12.442230] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:02.124 [2024-07-25 13:16:12.442251] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:02.124 [2024-07-25 13:16:12.442274] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.124 [2024-07-25 13:16:12.442283] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23e2a80 name raid_bdev1, state configuring 00:16:02.124 request: 00:16:02.124 { 00:16:02.124 "name": "raid_bdev1", 00:16:02.124 
"raid_level": "concat", 00:16:02.124 "base_bdevs": [ 00:16:02.124 "malloc1", 00:16:02.124 "malloc2", 00:16:02.124 "malloc3" 00:16:02.124 ], 00:16:02.124 "strip_size_kb": 64, 00:16:02.124 "superblock": false, 00:16:02.124 "method": "bdev_raid_create", 00:16:02.124 "req_id": 1 00:16:02.124 } 00:16:02.124 Got JSON-RPC error response 00:16:02.124 response: 00:16:02.124 { 00:16:02.124 "code": -17, 00:16:02.124 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:02.124 } 00:16:02.124 13:16:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:02.124 13:16:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:02.124 13:16:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:02.124 13:16:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:02.124 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.124 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:16:02.383 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:16:02.383 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:16:02.383 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:02.643 [2024-07-25 13:16:12.901946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:02.643 [2024-07-25 13:16:12.901985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.643 [2024-07-25 13:16:12.902001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23e2a80 
00:16:02.643 [2024-07-25 13:16:12.902013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.643 [2024-07-25 13:16:12.903502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.643 [2024-07-25 13:16:12.903529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:02.643 [2024-07-25 13:16:12.903589] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:02.643 [2024-07-25 13:16:12.903614] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:02.643 pt1 00:16:02.643 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:02.643 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:02.643 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:02.643 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:02.643 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:02.643 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:02.643 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.643 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.643 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.643 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.643 13:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.643 13:16:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.902 13:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:02.902 "name": "raid_bdev1", 00:16:02.902 "uuid": "10f33017-56d8-4e58-bb35-3e338332d73f", 00:16:02.902 "strip_size_kb": 64, 00:16:02.902 "state": "configuring", 00:16:02.902 "raid_level": "concat", 00:16:02.902 "superblock": true, 00:16:02.902 "num_base_bdevs": 3, 00:16:02.902 "num_base_bdevs_discovered": 1, 00:16:02.902 "num_base_bdevs_operational": 3, 00:16:02.902 "base_bdevs_list": [ 00:16:02.902 { 00:16:02.902 "name": "pt1", 00:16:02.902 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.902 "is_configured": true, 00:16:02.902 "data_offset": 2048, 00:16:02.902 "data_size": 63488 00:16:02.902 }, 00:16:02.902 { 00:16:02.902 "name": null, 00:16:02.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.902 "is_configured": false, 00:16:02.902 "data_offset": 2048, 00:16:02.902 "data_size": 63488 00:16:02.902 }, 00:16:02.902 { 00:16:02.902 "name": null, 00:16:02.902 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:02.902 "is_configured": false, 00:16:02.902 "data_offset": 2048, 00:16:02.902 "data_size": 63488 00:16:02.902 } 00:16:02.902 ] 00:16:02.902 }' 00:16:02.902 13:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:02.902 13:16:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.468 13:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:16:03.468 13:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:03.468 [2024-07-25 13:16:13.924843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:03.468 [2024-07-25 13:16:13.924890] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.468 [2024-07-25 13:16:13.924910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23e01a0 00:16:03.468 [2024-07-25 13:16:13.924922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.468 [2024-07-25 13:16:13.925236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.468 [2024-07-25 13:16:13.925253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:03.468 [2024-07-25 13:16:13.925308] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:03.468 [2024-07-25 13:16:13.925326] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:03.468 pt2 00:16:03.468 13:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:03.727 [2024-07-25 13:16:14.153447] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:03.727 13:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:03.727 13:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:03.727 13:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:03.727 13:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:03.727 13:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:03.727 13:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:03.727 13:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.727 13:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:16:03.727 13:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.727 13:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.727 13:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.727 13:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.987 13:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.987 "name": "raid_bdev1", 00:16:03.987 "uuid": "10f33017-56d8-4e58-bb35-3e338332d73f", 00:16:03.987 "strip_size_kb": 64, 00:16:03.987 "state": "configuring", 00:16:03.987 "raid_level": "concat", 00:16:03.987 "superblock": true, 00:16:03.987 "num_base_bdevs": 3, 00:16:03.987 "num_base_bdevs_discovered": 1, 00:16:03.987 "num_base_bdevs_operational": 3, 00:16:03.987 "base_bdevs_list": [ 00:16:03.987 { 00:16:03.987 "name": "pt1", 00:16:03.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.987 "is_configured": true, 00:16:03.987 "data_offset": 2048, 00:16:03.987 "data_size": 63488 00:16:03.987 }, 00:16:03.987 { 00:16:03.987 "name": null, 00:16:03.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.987 "is_configured": false, 00:16:03.987 "data_offset": 2048, 00:16:03.987 "data_size": 63488 00:16:03.987 }, 00:16:03.987 { 00:16:03.987 "name": null, 00:16:03.987 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:03.987 "is_configured": false, 00:16:03.987 "data_offset": 2048, 00:16:03.987 "data_size": 63488 00:16:03.987 } 00:16:03.987 ] 00:16:03.987 }' 00:16:03.987 13:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.987 13:16:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.556 13:16:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:16:04.556 13:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:04.556 13:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:04.815 [2024-07-25 13:16:15.172118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:04.815 [2024-07-25 13:16:15.172170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.815 [2024-07-25 13:16:15.172187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23e13a0 00:16:04.815 [2024-07-25 13:16:15.172198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.815 [2024-07-25 13:16:15.172517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.815 [2024-07-25 13:16:15.172535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:04.815 [2024-07-25 13:16:15.172591] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:04.815 [2024-07-25 13:16:15.172609] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:04.815 pt2 00:16:04.815 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:04.815 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:04.815 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:05.075 [2024-07-25 13:16:15.400723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:05.075 [2024-07-25 13:16:15.400758] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.075 [2024-07-25 13:16:15.400773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23e41c0 00:16:05.075 [2024-07-25 13:16:15.400784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.075 [2024-07-25 13:16:15.401061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.075 [2024-07-25 13:16:15.401077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:05.075 [2024-07-25 13:16:15.401125] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:05.075 [2024-07-25 13:16:15.401150] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:05.075 [2024-07-25 13:16:15.401248] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x23e59f0 00:16:05.075 [2024-07-25 13:16:15.401257] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:05.075 [2024-07-25 13:16:15.401411] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23eb280 00:16:05.075 [2024-07-25 13:16:15.401524] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23e59f0 00:16:05.075 [2024-07-25 13:16:15.401533] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23e59f0 00:16:05.075 [2024-07-25 13:16:15.401617] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.075 pt3 00:16:05.075 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:05.075 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:05.075 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:05.075 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=raid_bdev1 00:16:05.075 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:05.075 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:05.075 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:05.075 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:05.075 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:05.075 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:05.075 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:05.075 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:05.075 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.075 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.335 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:05.335 "name": "raid_bdev1", 00:16:05.335 "uuid": "10f33017-56d8-4e58-bb35-3e338332d73f", 00:16:05.335 "strip_size_kb": 64, 00:16:05.335 "state": "online", 00:16:05.335 "raid_level": "concat", 00:16:05.335 "superblock": true, 00:16:05.335 "num_base_bdevs": 3, 00:16:05.335 "num_base_bdevs_discovered": 3, 00:16:05.335 "num_base_bdevs_operational": 3, 00:16:05.335 "base_bdevs_list": [ 00:16:05.335 { 00:16:05.335 "name": "pt1", 00:16:05.335 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:05.335 "is_configured": true, 00:16:05.335 "data_offset": 2048, 00:16:05.335 "data_size": 63488 00:16:05.335 }, 00:16:05.335 { 00:16:05.335 "name": "pt2", 00:16:05.335 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:16:05.335 "is_configured": true, 00:16:05.335 "data_offset": 2048, 00:16:05.335 "data_size": 63488 00:16:05.335 }, 00:16:05.335 { 00:16:05.335 "name": "pt3", 00:16:05.335 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:05.335 "is_configured": true, 00:16:05.335 "data_offset": 2048, 00:16:05.335 "data_size": 63488 00:16:05.335 } 00:16:05.335 ] 00:16:05.335 }' 00:16:05.335 13:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:05.335 13:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.902 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:16:05.902 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:05.902 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:05.902 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:05.902 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:05.902 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:05.902 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:05.902 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:06.161 [2024-07-25 13:16:16.443717] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.161 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:06.161 "name": "raid_bdev1", 00:16:06.161 "aliases": [ 00:16:06.161 "10f33017-56d8-4e58-bb35-3e338332d73f" 00:16:06.161 ], 00:16:06.161 "product_name": "Raid Volume", 00:16:06.161 "block_size": 512, 00:16:06.161 "num_blocks": 
190464, 00:16:06.161 "uuid": "10f33017-56d8-4e58-bb35-3e338332d73f", 00:16:06.161 "assigned_rate_limits": { 00:16:06.161 "rw_ios_per_sec": 0, 00:16:06.161 "rw_mbytes_per_sec": 0, 00:16:06.161 "r_mbytes_per_sec": 0, 00:16:06.161 "w_mbytes_per_sec": 0 00:16:06.161 }, 00:16:06.161 "claimed": false, 00:16:06.161 "zoned": false, 00:16:06.161 "supported_io_types": { 00:16:06.161 "read": true, 00:16:06.161 "write": true, 00:16:06.161 "unmap": true, 00:16:06.161 "flush": true, 00:16:06.161 "reset": true, 00:16:06.161 "nvme_admin": false, 00:16:06.161 "nvme_io": false, 00:16:06.161 "nvme_io_md": false, 00:16:06.161 "write_zeroes": true, 00:16:06.161 "zcopy": false, 00:16:06.161 "get_zone_info": false, 00:16:06.161 "zone_management": false, 00:16:06.161 "zone_append": false, 00:16:06.161 "compare": false, 00:16:06.161 "compare_and_write": false, 00:16:06.161 "abort": false, 00:16:06.161 "seek_hole": false, 00:16:06.161 "seek_data": false, 00:16:06.161 "copy": false, 00:16:06.161 "nvme_iov_md": false 00:16:06.161 }, 00:16:06.161 "memory_domains": [ 00:16:06.161 { 00:16:06.161 "dma_device_id": "system", 00:16:06.161 "dma_device_type": 1 00:16:06.161 }, 00:16:06.161 { 00:16:06.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.161 "dma_device_type": 2 00:16:06.161 }, 00:16:06.161 { 00:16:06.161 "dma_device_id": "system", 00:16:06.161 "dma_device_type": 1 00:16:06.161 }, 00:16:06.161 { 00:16:06.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.161 "dma_device_type": 2 00:16:06.161 }, 00:16:06.161 { 00:16:06.161 "dma_device_id": "system", 00:16:06.161 "dma_device_type": 1 00:16:06.161 }, 00:16:06.161 { 00:16:06.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.161 "dma_device_type": 2 00:16:06.161 } 00:16:06.161 ], 00:16:06.161 "driver_specific": { 00:16:06.161 "raid": { 00:16:06.161 "uuid": "10f33017-56d8-4e58-bb35-3e338332d73f", 00:16:06.161 "strip_size_kb": 64, 00:16:06.161 "state": "online", 00:16:06.161 "raid_level": "concat", 00:16:06.161 "superblock": true, 
00:16:06.161 "num_base_bdevs": 3, 00:16:06.161 "num_base_bdevs_discovered": 3, 00:16:06.161 "num_base_bdevs_operational": 3, 00:16:06.161 "base_bdevs_list": [ 00:16:06.161 { 00:16:06.161 "name": "pt1", 00:16:06.161 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:06.161 "is_configured": true, 00:16:06.161 "data_offset": 2048, 00:16:06.161 "data_size": 63488 00:16:06.161 }, 00:16:06.161 { 00:16:06.162 "name": "pt2", 00:16:06.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.162 "is_configured": true, 00:16:06.162 "data_offset": 2048, 00:16:06.162 "data_size": 63488 00:16:06.162 }, 00:16:06.162 { 00:16:06.162 "name": "pt3", 00:16:06.162 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:06.162 "is_configured": true, 00:16:06.162 "data_offset": 2048, 00:16:06.162 "data_size": 63488 00:16:06.162 } 00:16:06.162 ] 00:16:06.162 } 00:16:06.162 } 00:16:06.162 }' 00:16:06.162 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:06.162 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:06.162 pt2 00:16:06.162 pt3' 00:16:06.162 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:06.162 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:06.162 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:06.421 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:06.421 "name": "pt1", 00:16:06.421 "aliases": [ 00:16:06.421 "00000000-0000-0000-0000-000000000001" 00:16:06.421 ], 00:16:06.421 "product_name": "passthru", 00:16:06.421 "block_size": 512, 00:16:06.421 "num_blocks": 65536, 00:16:06.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:06.421 
"assigned_rate_limits": { 00:16:06.421 "rw_ios_per_sec": 0, 00:16:06.421 "rw_mbytes_per_sec": 0, 00:16:06.421 "r_mbytes_per_sec": 0, 00:16:06.421 "w_mbytes_per_sec": 0 00:16:06.421 }, 00:16:06.421 "claimed": true, 00:16:06.421 "claim_type": "exclusive_write", 00:16:06.421 "zoned": false, 00:16:06.421 "supported_io_types": { 00:16:06.421 "read": true, 00:16:06.421 "write": true, 00:16:06.421 "unmap": true, 00:16:06.421 "flush": true, 00:16:06.421 "reset": true, 00:16:06.421 "nvme_admin": false, 00:16:06.421 "nvme_io": false, 00:16:06.421 "nvme_io_md": false, 00:16:06.421 "write_zeroes": true, 00:16:06.421 "zcopy": true, 00:16:06.421 "get_zone_info": false, 00:16:06.421 "zone_management": false, 00:16:06.421 "zone_append": false, 00:16:06.421 "compare": false, 00:16:06.421 "compare_and_write": false, 00:16:06.421 "abort": true, 00:16:06.421 "seek_hole": false, 00:16:06.421 "seek_data": false, 00:16:06.421 "copy": true, 00:16:06.421 "nvme_iov_md": false 00:16:06.421 }, 00:16:06.421 "memory_domains": [ 00:16:06.421 { 00:16:06.421 "dma_device_id": "system", 00:16:06.421 "dma_device_type": 1 00:16:06.421 }, 00:16:06.421 { 00:16:06.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.421 "dma_device_type": 2 00:16:06.421 } 00:16:06.421 ], 00:16:06.421 "driver_specific": { 00:16:06.421 "passthru": { 00:16:06.421 "name": "pt1", 00:16:06.421 "base_bdev_name": "malloc1" 00:16:06.421 } 00:16:06.421 } 00:16:06.421 }' 00:16:06.421 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.421 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.421 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:06.421 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.680 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.680 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # 
[[ null == null ]] 00:16:06.680 13:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:06.680 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:06.680 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:06.680 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:06.680 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:06.680 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:06.680 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:06.680 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:06.680 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:06.940 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:06.940 "name": "pt2", 00:16:06.940 "aliases": [ 00:16:06.940 "00000000-0000-0000-0000-000000000002" 00:16:06.940 ], 00:16:06.940 "product_name": "passthru", 00:16:06.940 "block_size": 512, 00:16:06.940 "num_blocks": 65536, 00:16:06.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.940 "assigned_rate_limits": { 00:16:06.940 "rw_ios_per_sec": 0, 00:16:06.940 "rw_mbytes_per_sec": 0, 00:16:06.940 "r_mbytes_per_sec": 0, 00:16:06.940 "w_mbytes_per_sec": 0 00:16:06.940 }, 00:16:06.940 "claimed": true, 00:16:06.940 "claim_type": "exclusive_write", 00:16:06.940 "zoned": false, 00:16:06.940 "supported_io_types": { 00:16:06.940 "read": true, 00:16:06.940 "write": true, 00:16:06.940 "unmap": true, 00:16:06.940 "flush": true, 00:16:06.940 "reset": true, 00:16:06.940 "nvme_admin": false, 00:16:06.940 "nvme_io": false, 00:16:06.940 "nvme_io_md": false, 00:16:06.940 
"write_zeroes": true, 00:16:06.940 "zcopy": true, 00:16:06.940 "get_zone_info": false, 00:16:06.940 "zone_management": false, 00:16:06.940 "zone_append": false, 00:16:06.940 "compare": false, 00:16:06.940 "compare_and_write": false, 00:16:06.940 "abort": true, 00:16:06.940 "seek_hole": false, 00:16:06.940 "seek_data": false, 00:16:06.940 "copy": true, 00:16:06.940 "nvme_iov_md": false 00:16:06.940 }, 00:16:06.940 "memory_domains": [ 00:16:06.940 { 00:16:06.941 "dma_device_id": "system", 00:16:06.941 "dma_device_type": 1 00:16:06.941 }, 00:16:06.941 { 00:16:06.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.941 "dma_device_type": 2 00:16:06.941 } 00:16:06.941 ], 00:16:06.941 "driver_specific": { 00:16:06.941 "passthru": { 00:16:06.941 "name": "pt2", 00:16:06.941 "base_bdev_name": "malloc2" 00:16:06.941 } 00:16:06.941 } 00:16:06.941 }' 00:16:06.941 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.201 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.201 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:07.201 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.201 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.201 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:07.201 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.201 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.201 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:07.201 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.460 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.460 13:16:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:07.460 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:07.460 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:07.460 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:07.720 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:07.720 "name": "pt3", 00:16:07.720 "aliases": [ 00:16:07.720 "00000000-0000-0000-0000-000000000003" 00:16:07.720 ], 00:16:07.720 "product_name": "passthru", 00:16:07.720 "block_size": 512, 00:16:07.720 "num_blocks": 65536, 00:16:07.720 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:07.720 "assigned_rate_limits": { 00:16:07.720 "rw_ios_per_sec": 0, 00:16:07.720 "rw_mbytes_per_sec": 0, 00:16:07.720 "r_mbytes_per_sec": 0, 00:16:07.720 "w_mbytes_per_sec": 0 00:16:07.720 }, 00:16:07.720 "claimed": true, 00:16:07.720 "claim_type": "exclusive_write", 00:16:07.720 "zoned": false, 00:16:07.720 "supported_io_types": { 00:16:07.720 "read": true, 00:16:07.720 "write": true, 00:16:07.720 "unmap": true, 00:16:07.720 "flush": true, 00:16:07.720 "reset": true, 00:16:07.720 "nvme_admin": false, 00:16:07.720 "nvme_io": false, 00:16:07.720 "nvme_io_md": false, 00:16:07.720 "write_zeroes": true, 00:16:07.720 "zcopy": true, 00:16:07.720 "get_zone_info": false, 00:16:07.720 "zone_management": false, 00:16:07.720 "zone_append": false, 00:16:07.720 "compare": false, 00:16:07.720 "compare_and_write": false, 00:16:07.720 "abort": true, 00:16:07.720 "seek_hole": false, 00:16:07.720 "seek_data": false, 00:16:07.720 "copy": true, 00:16:07.720 "nvme_iov_md": false 00:16:07.720 }, 00:16:07.720 "memory_domains": [ 00:16:07.720 { 00:16:07.720 "dma_device_id": "system", 00:16:07.720 "dma_device_type": 1 00:16:07.720 }, 00:16:07.720 { 00:16:07.720 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.720 "dma_device_type": 2 00:16:07.720 } 00:16:07.720 ], 00:16:07.720 "driver_specific": { 00:16:07.720 "passthru": { 00:16:07.720 "name": "pt3", 00:16:07.720 "base_bdev_name": "malloc3" 00:16:07.720 } 00:16:07.720 } 00:16:07.720 }' 00:16:07.720 13:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.720 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.720 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:07.720 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.720 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.720 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:07.720 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.720 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.979 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:07.979 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.979 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.979 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:07.979 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:07.979 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:16:08.241 [2024-07-25 13:16:18.501224] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.241 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 
10f33017-56d8-4e58-bb35-3e338332d73f '!=' 10f33017-56d8-4e58-bb35-3e338332d73f ']' 00:16:08.241 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:16:08.241 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:08.241 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:08.241 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 877533 00:16:08.241 13:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 877533 ']' 00:16:08.241 13:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 877533 00:16:08.241 13:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:08.241 13:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:08.242 13:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 877533 00:16:08.242 13:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:08.242 13:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:08.242 13:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 877533' 00:16:08.242 killing process with pid 877533 00:16:08.242 13:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 877533 00:16:08.242 [2024-07-25 13:16:18.579673] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:08.242 [2024-07-25 13:16:18.579723] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.242 [2024-07-25 13:16:18.579771] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.242 [2024-07-25 13:16:18.579781] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x23e59f0 name raid_bdev1, state offline 00:16:08.242 13:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 877533 00:16:08.242 [2024-07-25 13:16:18.602931] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.501 13:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:16:08.501 00:16:08.501 real 0m13.549s 00:16:08.501 user 0m24.354s 00:16:08.501 sys 0m2.458s 00:16:08.501 13:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:08.501 13:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.501 ************************************ 00:16:08.501 END TEST raid_superblock_test 00:16:08.501 ************************************ 00:16:08.501 13:16:18 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:16:08.501 13:16:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:08.501 13:16:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:08.501 13:16:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.501 ************************************ 00:16:08.501 START TEST raid_read_error_test 00:16:08.501 ************************************ 00:16:08.501 13:16:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:16:08.501 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:16:08.501 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:16:08.501 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:16:08.501 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:08.501 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:08.501 13:16:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:16:08.501 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:08.501 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:08.501 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:16:08.501 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:08.501 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:08.501 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:16:08.501 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.nzp6b083FE 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=880212 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 880212 /var/tmp/spdk-raid.sock 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 880212 ']' 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:08.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.502 13:16:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.502 [2024-07-25 13:16:18.934320] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:16:08.502 [2024-07-25 13:16:18.934376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880212 ] 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3d:02.3 cannot be used 
00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:08.761 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.761 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:08.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.762 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:08.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.762 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:08.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.762 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:08.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.762 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:08.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.762 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:08.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.762 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:08.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.762 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:08.762 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.762 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:08.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.762 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:08.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.762 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:08.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.762 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:08.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.762 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:08.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:08.762 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:08.762 [2024-07-25 13:16:19.064381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.762 [2024-07-25 13:16:19.152032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.762 [2024-07-25 13:16:19.213332] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.762 [2024-07-25 13:16:19.213375] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.700 13:16:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.700 13:16:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:09.700 13:16:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:09.700 13:16:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:09.700 BaseBdev1_malloc 00:16:09.700 13:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:09.959 true 00:16:09.959 13:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:10.219 [2024-07-25 13:16:20.498649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:10.219 [2024-07-25 13:16:20.498689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.219 [2024-07-25 13:16:20.498709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe921d0 00:16:10.219 [2024-07-25 13:16:20.498721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.219 [2024-07-25 13:16:20.500301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.219 [2024-07-25 13:16:20.500331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:10.219 BaseBdev1 00:16:10.219 13:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:10.219 13:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:10.478 BaseBdev2_malloc 00:16:10.478 13:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:10.478 true 00:16:10.738 13:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:10.738 [2024-07-25 13:16:21.176644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:16:10.738 [2024-07-25 13:16:21.176682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.738 [2024-07-25 13:16:21.176700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe95710 00:16:10.738 [2024-07-25 13:16:21.176711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.738 [2024-07-25 13:16:21.178087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.738 [2024-07-25 13:16:21.178115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:10.738 BaseBdev2 00:16:10.738 13:16:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:10.738 13:16:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:10.997 BaseBdev3_malloc 00:16:10.997 13:16:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:11.256 true 00:16:11.256 13:16:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:11.516 [2024-07-25 13:16:21.850749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:11.516 [2024-07-25 13:16:21.850788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.516 [2024-07-25 13:16:21.850810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe97de0 00:16:11.516 [2024-07-25 13:16:21.850824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.516 [2024-07-25 13:16:21.852186] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.516 [2024-07-25 13:16:21.852214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:11.516 BaseBdev3 00:16:11.516 13:16:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:16:11.775 [2024-07-25 13:16:22.075369] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.775 [2024-07-25 13:16:22.076541] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.775 [2024-07-25 13:16:22.076604] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:11.775 [2024-07-25 13:16:22.076777] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xe99780 00:16:11.775 [2024-07-25 13:16:22.076788] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:11.775 [2024-07-25 13:16:22.076974] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe9e180 00:16:11.775 [2024-07-25 13:16:22.077104] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe99780 00:16:11.775 [2024-07-25 13:16:22.077113] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe99780 00:16:11.775 [2024-07-25 13:16:22.077229] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.775 13:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:11.775 13:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:11.775 13:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:11.775 13:16:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:11.775 13:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:11.776 13:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:11.776 13:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.776 13:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.776 13:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.776 13:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.776 13:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.776 13:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.035 13:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:12.035 "name": "raid_bdev1", 00:16:12.035 "uuid": "d9924c7b-3f64-42f0-b050-464edcbc8c05", 00:16:12.035 "strip_size_kb": 64, 00:16:12.035 "state": "online", 00:16:12.035 "raid_level": "concat", 00:16:12.035 "superblock": true, 00:16:12.035 "num_base_bdevs": 3, 00:16:12.035 "num_base_bdevs_discovered": 3, 00:16:12.035 "num_base_bdevs_operational": 3, 00:16:12.035 "base_bdevs_list": [ 00:16:12.035 { 00:16:12.035 "name": "BaseBdev1", 00:16:12.035 "uuid": "f676b5d1-d0ec-515a-9435-80eb80fbacba", 00:16:12.035 "is_configured": true, 00:16:12.035 "data_offset": 2048, 00:16:12.035 "data_size": 63488 00:16:12.035 }, 00:16:12.035 { 00:16:12.035 "name": "BaseBdev2", 00:16:12.035 "uuid": "bddfa8df-67a3-5673-af3c-73d6cfa237e9", 00:16:12.035 "is_configured": true, 00:16:12.035 "data_offset": 2048, 00:16:12.035 "data_size": 63488 00:16:12.035 }, 
00:16:12.035 { 00:16:12.035 "name": "BaseBdev3", 00:16:12.035 "uuid": "019df1bd-2f4e-57b4-9722-e150ab77ee01", 00:16:12.035 "is_configured": true, 00:16:12.035 "data_offset": 2048, 00:16:12.035 "data_size": 63488 00:16:12.035 } 00:16:12.035 ] 00:16:12.035 }' 00:16:12.035 13:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:12.035 13:16:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.604 13:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:12.604 13:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:12.604 [2024-07-25 13:16:23.018088] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe9aab0 00:16:13.543 13:16:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:13.864 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:13.864 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:16:13.864 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:16:13.864 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:13.864 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:13.864 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:13.864 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:13.864 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:16:13.864 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:13.864 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:13.865 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:13.865 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:13.865 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:13.865 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.865 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.123 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:14.123 "name": "raid_bdev1", 00:16:14.123 "uuid": "d9924c7b-3f64-42f0-b050-464edcbc8c05", 00:16:14.123 "strip_size_kb": 64, 00:16:14.123 "state": "online", 00:16:14.123 "raid_level": "concat", 00:16:14.123 "superblock": true, 00:16:14.123 "num_base_bdevs": 3, 00:16:14.123 "num_base_bdevs_discovered": 3, 00:16:14.123 "num_base_bdevs_operational": 3, 00:16:14.123 "base_bdevs_list": [ 00:16:14.123 { 00:16:14.123 "name": "BaseBdev1", 00:16:14.123 "uuid": "f676b5d1-d0ec-515a-9435-80eb80fbacba", 00:16:14.123 "is_configured": true, 00:16:14.123 "data_offset": 2048, 00:16:14.123 "data_size": 63488 00:16:14.123 }, 00:16:14.123 { 00:16:14.123 "name": "BaseBdev2", 00:16:14.123 "uuid": "bddfa8df-67a3-5673-af3c-73d6cfa237e9", 00:16:14.123 "is_configured": true, 00:16:14.123 "data_offset": 2048, 00:16:14.123 "data_size": 63488 00:16:14.123 }, 00:16:14.123 { 00:16:14.123 "name": "BaseBdev3", 00:16:14.123 "uuid": "019df1bd-2f4e-57b4-9722-e150ab77ee01", 00:16:14.123 "is_configured": true, 00:16:14.123 "data_offset": 2048, 
00:16:14.123 "data_size": 63488 00:16:14.123 } 00:16:14.123 ] 00:16:14.123 }' 00:16:14.123 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:14.123 13:16:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.691 13:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:14.691 [2024-07-25 13:16:25.176731] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.691 [2024-07-25 13:16:25.176763] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.951 [2024-07-25 13:16:25.179711] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.951 [2024-07-25 13:16:25.179744] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.951 [2024-07-25 13:16:25.179773] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.951 [2024-07-25 13:16:25.179783] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe99780 name raid_bdev1, state offline 00:16:14.951 0 00:16:14.951 13:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 880212 00:16:14.951 13:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 880212 ']' 00:16:14.951 13:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 880212 00:16:14.951 13:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:16:14.951 13:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.951 13:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 880212 00:16:14.951 13:16:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:14.951 13:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:14.951 13:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 880212' 00:16:14.951 killing process with pid 880212 00:16:14.951 13:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 880212 00:16:14.951 [2024-07-25 13:16:25.243784] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.951 13:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 880212 00:16:14.951 [2024-07-25 13:16:25.262128] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.211 13:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.nzp6b083FE 00:16:15.211 13:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:15.211 13:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:15.211 13:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:16:15.211 13:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:16:15.211 13:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:15.211 13:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:15.211 13:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:16:15.211 00:16:15.211 real 0m6.606s 00:16:15.211 user 0m10.440s 00:16:15.211 sys 0m1.132s 00:16:15.211 13:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:15.211 13:16:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.211 ************************************ 00:16:15.211 END TEST raid_read_error_test 00:16:15.211 
************************************ 00:16:15.211 13:16:25 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:16:15.211 13:16:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:15.211 13:16:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:15.211 13:16:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.211 ************************************ 00:16:15.211 START TEST raid_write_error_test 00:16:15.211 ************************************ 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.oJyFpeV751 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=881374 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 881374 /var/tmp/spdk-raid.sock 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@831 -- # '[' -z 881374 ']' 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:15.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:15.211 13:16:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.211 [2024-07-25 13:16:25.611174] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:16:15.211 [2024-07-25 13:16:25.611229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881374 ] 00:16:15.211 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.211 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:15.211 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.211 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:15.211 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.211 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:15.211 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.211 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:15.211 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.211 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:15.211 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:16:15.211 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: 
Requested device 0000:3f:01.3 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:15.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:15.212 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:15.472 [2024-07-25 13:16:25.742243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.472 [2024-07-25 13:16:25.827243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.472 [2024-07-25 13:16:25.891110] 
bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.472 [2024-07-25 13:16:25.891164] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.041 13:16:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:16.041 13:16:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:16.041 13:16:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:16.041 13:16:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:16.300 BaseBdev1_malloc 00:16:16.300 13:16:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:16.559 true 00:16:16.559 13:16:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:16.818 [2024-07-25 13:16:27.184602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:16.818 [2024-07-25 13:16:27.184641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.818 [2024-07-25 13:16:27.184659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x212e1d0 00:16:16.818 [2024-07-25 13:16:27.184671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.818 [2024-07-25 13:16:27.186245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.818 [2024-07-25 13:16:27.186274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:16.818 BaseBdev1 00:16:16.818 13:16:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:16.818 13:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:17.077 BaseBdev2_malloc 00:16:17.077 13:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:17.337 true 00:16:17.337 13:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:17.596 [2024-07-25 13:16:27.866595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:17.596 [2024-07-25 13:16:27.866636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.596 [2024-07-25 13:16:27.866654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2131710 00:16:17.596 [2024-07-25 13:16:27.866665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.596 [2024-07-25 13:16:27.868048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.596 [2024-07-25 13:16:27.868077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:17.596 BaseBdev2 00:16:17.596 13:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:17.596 13:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:17.855 BaseBdev3_malloc 00:16:17.856 13:16:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:17.856 true 00:16:17.856 13:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:18.114 [2024-07-25 13:16:28.544691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:18.114 [2024-07-25 13:16:28.544732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.114 [2024-07-25 13:16:28.544752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2133de0 00:16:18.114 [2024-07-25 13:16:28.544764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.114 [2024-07-25 13:16:28.546136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.114 [2024-07-25 13:16:28.546170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:18.114 BaseBdev3 00:16:18.114 13:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:16:18.371 [2024-07-25 13:16:28.757285] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.371 [2024-07-25 13:16:28.758410] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:18.371 [2024-07-25 13:16:28.758472] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:18.371 [2024-07-25 13:16:28.758643] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2135780 00:16:18.371 [2024-07-25 13:16:28.758653] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:18.371 [2024-07-25 13:16:28.758832] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x213a180 00:16:18.371 [2024-07-25 13:16:28.758959] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2135780 00:16:18.371 [2024-07-25 13:16:28.758968] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2135780 00:16:18.371 [2024-07-25 13:16:28.759070] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.371 13:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:18.371 13:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:18.371 13:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:18.371 13:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:18.371 13:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:18.371 13:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:18.371 13:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.371 13:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.371 13:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.372 13:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.372 13:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.372 13:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:16:18.630 13:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.630 "name": "raid_bdev1", 00:16:18.630 "uuid": "60c1ab16-a3d4-4dbf-8e1a-d232ec337271", 00:16:18.630 "strip_size_kb": 64, 00:16:18.630 "state": "online", 00:16:18.630 "raid_level": "concat", 00:16:18.630 "superblock": true, 00:16:18.630 "num_base_bdevs": 3, 00:16:18.630 "num_base_bdevs_discovered": 3, 00:16:18.630 "num_base_bdevs_operational": 3, 00:16:18.630 "base_bdevs_list": [ 00:16:18.630 { 00:16:18.630 "name": "BaseBdev1", 00:16:18.630 "uuid": "79f5886e-bf8c-533d-8aad-ccd815a2a187", 00:16:18.630 "is_configured": true, 00:16:18.630 "data_offset": 2048, 00:16:18.630 "data_size": 63488 00:16:18.630 }, 00:16:18.630 { 00:16:18.630 "name": "BaseBdev2", 00:16:18.630 "uuid": "cc11481e-3939-5c8f-90d7-fc88c14cf418", 00:16:18.630 "is_configured": true, 00:16:18.630 "data_offset": 2048, 00:16:18.630 "data_size": 63488 00:16:18.630 }, 00:16:18.630 { 00:16:18.630 "name": "BaseBdev3", 00:16:18.630 "uuid": "d7396f75-2a39-5d92-aef8-caed10e7a2a0", 00:16:18.630 "is_configured": true, 00:16:18.630 "data_offset": 2048, 00:16:18.630 "data_size": 63488 00:16:18.630 } 00:16:18.630 ] 00:16:18.630 }' 00:16:18.630 13:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.630 13:16:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.196 13:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:19.196 13:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:19.197 [2024-07-25 13:16:29.652010] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2136ab0 00:16:20.133 13:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:20.394 13:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:20.394 13:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:16:20.394 13:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:16:20.394 13:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:20.394 13:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:20.394 13:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:20.394 13:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:20.394 13:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:20.394 13:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:20.394 13:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:20.394 13:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:20.394 13:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:20.394 13:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:20.394 13:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.394 13:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.653 13:16:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.653 "name": "raid_bdev1", 00:16:20.653 "uuid": "60c1ab16-a3d4-4dbf-8e1a-d232ec337271", 00:16:20.653 "strip_size_kb": 64, 00:16:20.653 "state": "online", 00:16:20.653 "raid_level": "concat", 00:16:20.653 "superblock": true, 00:16:20.653 "num_base_bdevs": 3, 00:16:20.653 "num_base_bdevs_discovered": 3, 00:16:20.653 "num_base_bdevs_operational": 3, 00:16:20.653 "base_bdevs_list": [ 00:16:20.653 { 00:16:20.653 "name": "BaseBdev1", 00:16:20.653 "uuid": "79f5886e-bf8c-533d-8aad-ccd815a2a187", 00:16:20.653 "is_configured": true, 00:16:20.653 "data_offset": 2048, 00:16:20.653 "data_size": 63488 00:16:20.653 }, 00:16:20.653 { 00:16:20.653 "name": "BaseBdev2", 00:16:20.653 "uuid": "cc11481e-3939-5c8f-90d7-fc88c14cf418", 00:16:20.653 "is_configured": true, 00:16:20.653 "data_offset": 2048, 00:16:20.653 "data_size": 63488 00:16:20.653 }, 00:16:20.653 { 00:16:20.653 "name": "BaseBdev3", 00:16:20.653 "uuid": "d7396f75-2a39-5d92-aef8-caed10e7a2a0", 00:16:20.653 "is_configured": true, 00:16:20.653 "data_offset": 2048, 00:16:20.653 "data_size": 63488 00:16:20.653 } 00:16:20.653 ] 00:16:20.653 }' 00:16:20.653 13:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.653 13:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.221 13:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:21.480 [2024-07-25 13:16:31.794046] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.480 [2024-07-25 13:16:31.794085] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.480 [2024-07-25 13:16:31.797040] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.480 [2024-07-25 13:16:31.797073] bdev_raid.c: 343:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:16:21.480 [2024-07-25 13:16:31.797102] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.480 [2024-07-25 13:16:31.797112] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2135780 name raid_bdev1, state offline 00:16:21.480 0 00:16:21.480 13:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 881374 00:16:21.480 13:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 881374 ']' 00:16:21.480 13:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 881374 00:16:21.480 13:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:16:21.480 13:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.480 13:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 881374 00:16:21.480 13:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.480 13:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.480 13:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 881374' 00:16:21.480 killing process with pid 881374 00:16:21.480 13:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 881374 00:16:21.480 [2024-07-25 13:16:31.871753] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.480 13:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 881374 00:16:21.480 [2024-07-25 13:16:31.889575] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.740 13:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.oJyFpeV751 00:16:21.740 13:16:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:21.740 13:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:21.740 13:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:16:21.740 13:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:16:21.740 13:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:21.740 13:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:21.740 13:16:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:16:21.740 00:16:21.740 real 0m6.547s 00:16:21.740 user 0m10.298s 00:16:21.740 sys 0m1.147s 00:16:21.740 13:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:21.740 13:16:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.740 ************************************ 00:16:21.740 END TEST raid_write_error_test 00:16:21.740 ************************************ 00:16:21.740 13:16:32 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:16:21.740 13:16:32 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:16:21.740 13:16:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:21.740 13:16:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:21.740 13:16:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.740 ************************************ 00:16:21.740 START TEST raid_state_function_test 00:16:21.740 ************************************ 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 
00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=882539 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 882539' 00:16:21.740 Process raid pid: 882539 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 882539 /var/tmp/spdk-raid.sock 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 882539 ']' 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:21.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:21.740 13:16:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.999 [2024-07-25 13:16:32.245824] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:16:21.999 [2024-07-25 13:16:32.245880] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:21.999 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:21.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:21.999 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:21.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:21.999 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:21.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:21.999 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:21.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:21.999 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:21.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:21.999 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:21.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:21.999 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:21.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:21.999 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:21.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:21.999 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:21.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:21.999 EAL: Requested device 
0000:3d:02.1 cannot be used 00:16:21.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:21.999 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:21.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:21.999 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:22.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:22.000 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:22.000 [2024-07-25 13:16:32.378926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.000 [2024-07-25 13:16:32.465279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.258 [2024-07-25 13:16:32.525389] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.258 [2024-07-25 13:16:32.525423] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.039 13:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:23.039 13:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:23.039 13:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 
'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:23.039 [2024-07-25 13:16:33.343506] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:23.039 [2024-07-25 13:16:33.343542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:23.039 [2024-07-25 13:16:33.343553] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.039 [2024-07-25 13:16:33.343564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.039 [2024-07-25 13:16:33.343572] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:23.039 [2024-07-25 13:16:33.343583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:23.039 13:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:23.039 13:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:23.039 13:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:23.039 13:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:23.039 13:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:23.039 13:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:23.039 13:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.039 13:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.039 13:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.039 13:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:16:23.039 13:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.039 13:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.299 13:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.299 "name": "Existed_Raid", 00:16:23.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.299 "strip_size_kb": 0, 00:16:23.299 "state": "configuring", 00:16:23.299 "raid_level": "raid1", 00:16:23.299 "superblock": false, 00:16:23.299 "num_base_bdevs": 3, 00:16:23.299 "num_base_bdevs_discovered": 0, 00:16:23.299 "num_base_bdevs_operational": 3, 00:16:23.299 "base_bdevs_list": [ 00:16:23.299 { 00:16:23.299 "name": "BaseBdev1", 00:16:23.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.299 "is_configured": false, 00:16:23.299 "data_offset": 0, 00:16:23.299 "data_size": 0 00:16:23.299 }, 00:16:23.299 { 00:16:23.299 "name": "BaseBdev2", 00:16:23.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.299 "is_configured": false, 00:16:23.299 "data_offset": 0, 00:16:23.299 "data_size": 0 00:16:23.299 }, 00:16:23.299 { 00:16:23.299 "name": "BaseBdev3", 00:16:23.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.299 "is_configured": false, 00:16:23.299 "data_offset": 0, 00:16:23.299 "data_size": 0 00:16:23.299 } 00:16:23.299 ] 00:16:23.299 }' 00:16:23.299 13:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:23.299 13:16:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.865 13:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:24.124 [2024-07-25 13:16:34.386193] 
bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:24.124 [2024-07-25 13:16:34.386223] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11a0f40 name Existed_Raid, state configuring 00:16:24.124 13:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:24.383 [2024-07-25 13:16:34.614794] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:24.383 [2024-07-25 13:16:34.614824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:24.383 [2024-07-25 13:16:34.614834] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.383 [2024-07-25 13:16:34.614845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.383 [2024-07-25 13:16:34.614853] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:24.383 [2024-07-25 13:16:34.614863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.383 13:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:24.383 [2024-07-25 13:16:34.852890] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.383 BaseBdev1 00:16:24.383 13:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:24.383 13:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:24.383 13:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:24.383 13:16:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:24.383 13:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:24.383 13:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:24.383 13:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:24.643 13:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:24.902 [ 00:16:24.902 { 00:16:24.902 "name": "BaseBdev1", 00:16:24.902 "aliases": [ 00:16:24.902 "4580942d-052a-4e75-850b-18acccb9ab15" 00:16:24.902 ], 00:16:24.902 "product_name": "Malloc disk", 00:16:24.902 "block_size": 512, 00:16:24.902 "num_blocks": 65536, 00:16:24.902 "uuid": "4580942d-052a-4e75-850b-18acccb9ab15", 00:16:24.902 "assigned_rate_limits": { 00:16:24.902 "rw_ios_per_sec": 0, 00:16:24.902 "rw_mbytes_per_sec": 0, 00:16:24.902 "r_mbytes_per_sec": 0, 00:16:24.902 "w_mbytes_per_sec": 0 00:16:24.902 }, 00:16:24.902 "claimed": true, 00:16:24.902 "claim_type": "exclusive_write", 00:16:24.902 "zoned": false, 00:16:24.902 "supported_io_types": { 00:16:24.902 "read": true, 00:16:24.902 "write": true, 00:16:24.902 "unmap": true, 00:16:24.902 "flush": true, 00:16:24.902 "reset": true, 00:16:24.902 "nvme_admin": false, 00:16:24.902 "nvme_io": false, 00:16:24.902 "nvme_io_md": false, 00:16:24.902 "write_zeroes": true, 00:16:24.902 "zcopy": true, 00:16:24.902 "get_zone_info": false, 00:16:24.902 "zone_management": false, 00:16:24.902 "zone_append": false, 00:16:24.902 "compare": false, 00:16:24.902 "compare_and_write": false, 00:16:24.902 "abort": true, 00:16:24.902 "seek_hole": false, 00:16:24.902 "seek_data": false, 00:16:24.902 "copy": 
true, 00:16:24.902 "nvme_iov_md": false 00:16:24.902 }, 00:16:24.902 "memory_domains": [ 00:16:24.902 { 00:16:24.902 "dma_device_id": "system", 00:16:24.902 "dma_device_type": 1 00:16:24.902 }, 00:16:24.902 { 00:16:24.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.902 "dma_device_type": 2 00:16:24.902 } 00:16:24.902 ], 00:16:24.902 "driver_specific": {} 00:16:24.902 } 00:16:24.902 ] 00:16:24.902 13:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:24.902 13:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:24.902 13:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:24.902 13:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:24.902 13:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:24.902 13:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:24.902 13:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:24.902 13:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.902 13:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.902 13:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.902 13:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.902 13:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.902 13:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:25.161 13:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.161 "name": "Existed_Raid", 00:16:25.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.161 "strip_size_kb": 0, 00:16:25.161 "state": "configuring", 00:16:25.161 "raid_level": "raid1", 00:16:25.161 "superblock": false, 00:16:25.161 "num_base_bdevs": 3, 00:16:25.161 "num_base_bdevs_discovered": 1, 00:16:25.161 "num_base_bdevs_operational": 3, 00:16:25.161 "base_bdevs_list": [ 00:16:25.161 { 00:16:25.161 "name": "BaseBdev1", 00:16:25.161 "uuid": "4580942d-052a-4e75-850b-18acccb9ab15", 00:16:25.162 "is_configured": true, 00:16:25.162 "data_offset": 0, 00:16:25.162 "data_size": 65536 00:16:25.162 }, 00:16:25.162 { 00:16:25.162 "name": "BaseBdev2", 00:16:25.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.162 "is_configured": false, 00:16:25.162 "data_offset": 0, 00:16:25.162 "data_size": 0 00:16:25.162 }, 00:16:25.162 { 00:16:25.162 "name": "BaseBdev3", 00:16:25.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.162 "is_configured": false, 00:16:25.162 "data_offset": 0, 00:16:25.162 "data_size": 0 00:16:25.162 } 00:16:25.162 ] 00:16:25.162 }' 00:16:25.162 13:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.162 13:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.730 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:25.989 [2024-07-25 13:16:36.376917] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.989 [2024-07-25 13:16:36.376953] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11a0810 name Existed_Raid, state configuring 00:16:25.989 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:26.249 [2024-07-25 13:16:36.589506] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.249 [2024-07-25 13:16:36.590889] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.249 [2024-07-25 13:16:36.590921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.249 [2024-07-25 13:16:36.590930] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:26.249 [2024-07-25 13:16:36.590941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:26.249 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:26.249 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:26.249 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:26.249 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:26.249 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:26.249 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:26.249 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:26.249 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:26.249 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:26.249 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:26.249 13:16:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:26.249 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:26.249 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.249 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.558 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.558 "name": "Existed_Raid", 00:16:26.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.558 "strip_size_kb": 0, 00:16:26.558 "state": "configuring", 00:16:26.558 "raid_level": "raid1", 00:16:26.558 "superblock": false, 00:16:26.558 "num_base_bdevs": 3, 00:16:26.558 "num_base_bdevs_discovered": 1, 00:16:26.558 "num_base_bdevs_operational": 3, 00:16:26.558 "base_bdevs_list": [ 00:16:26.558 { 00:16:26.558 "name": "BaseBdev1", 00:16:26.558 "uuid": "4580942d-052a-4e75-850b-18acccb9ab15", 00:16:26.558 "is_configured": true, 00:16:26.558 "data_offset": 0, 00:16:26.558 "data_size": 65536 00:16:26.558 }, 00:16:26.558 { 00:16:26.558 "name": "BaseBdev2", 00:16:26.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.558 "is_configured": false, 00:16:26.558 "data_offset": 0, 00:16:26.558 "data_size": 0 00:16:26.558 }, 00:16:26.558 { 00:16:26.558 "name": "BaseBdev3", 00:16:26.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.558 "is_configured": false, 00:16:26.558 "data_offset": 0, 00:16:26.558 "data_size": 0 00:16:26.558 } 00:16:26.558 ] 00:16:26.558 }' 00:16:26.558 13:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.558 13:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.144 13:16:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:27.144 [2024-07-25 13:16:37.567243] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:27.144 BaseBdev2 00:16:27.144 13:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:27.144 13:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:27.144 13:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:27.144 13:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:27.144 13:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:27.144 13:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:27.144 13:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:27.403 13:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:27.663 [ 00:16:27.663 { 00:16:27.663 "name": "BaseBdev2", 00:16:27.663 "aliases": [ 00:16:27.663 "d383a404-347a-4de8-85d2-0230cf609770" 00:16:27.663 ], 00:16:27.663 "product_name": "Malloc disk", 00:16:27.663 "block_size": 512, 00:16:27.663 "num_blocks": 65536, 00:16:27.663 "uuid": "d383a404-347a-4de8-85d2-0230cf609770", 00:16:27.663 "assigned_rate_limits": { 00:16:27.663 "rw_ios_per_sec": 0, 00:16:27.663 "rw_mbytes_per_sec": 0, 00:16:27.663 "r_mbytes_per_sec": 0, 00:16:27.663 "w_mbytes_per_sec": 0 00:16:27.663 }, 00:16:27.663 "claimed": true, 00:16:27.663 "claim_type": 
"exclusive_write", 00:16:27.663 "zoned": false, 00:16:27.663 "supported_io_types": { 00:16:27.663 "read": true, 00:16:27.663 "write": true, 00:16:27.663 "unmap": true, 00:16:27.663 "flush": true, 00:16:27.663 "reset": true, 00:16:27.663 "nvme_admin": false, 00:16:27.663 "nvme_io": false, 00:16:27.663 "nvme_io_md": false, 00:16:27.663 "write_zeroes": true, 00:16:27.663 "zcopy": true, 00:16:27.663 "get_zone_info": false, 00:16:27.663 "zone_management": false, 00:16:27.663 "zone_append": false, 00:16:27.663 "compare": false, 00:16:27.663 "compare_and_write": false, 00:16:27.663 "abort": true, 00:16:27.663 "seek_hole": false, 00:16:27.663 "seek_data": false, 00:16:27.663 "copy": true, 00:16:27.663 "nvme_iov_md": false 00:16:27.663 }, 00:16:27.663 "memory_domains": [ 00:16:27.663 { 00:16:27.663 "dma_device_id": "system", 00:16:27.663 "dma_device_type": 1 00:16:27.663 }, 00:16:27.663 { 00:16:27.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.663 "dma_device_type": 2 00:16:27.663 } 00:16:27.663 ], 00:16:27.663 "driver_specific": {} 00:16:27.663 } 00:16:27.663 ] 00:16:27.663 13:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:27.663 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:27.663 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:27.663 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:27.663 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:27.663 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:27.663 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:27.663 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:16:27.663 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:27.663 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.663 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.663 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.663 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.663 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.663 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.922 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.922 "name": "Existed_Raid", 00:16:27.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.922 "strip_size_kb": 0, 00:16:27.922 "state": "configuring", 00:16:27.922 "raid_level": "raid1", 00:16:27.922 "superblock": false, 00:16:27.922 "num_base_bdevs": 3, 00:16:27.922 "num_base_bdevs_discovered": 2, 00:16:27.922 "num_base_bdevs_operational": 3, 00:16:27.922 "base_bdevs_list": [ 00:16:27.922 { 00:16:27.922 "name": "BaseBdev1", 00:16:27.923 "uuid": "4580942d-052a-4e75-850b-18acccb9ab15", 00:16:27.923 "is_configured": true, 00:16:27.923 "data_offset": 0, 00:16:27.923 "data_size": 65536 00:16:27.923 }, 00:16:27.923 { 00:16:27.923 "name": "BaseBdev2", 00:16:27.923 "uuid": "d383a404-347a-4de8-85d2-0230cf609770", 00:16:27.923 "is_configured": true, 00:16:27.923 "data_offset": 0, 00:16:27.923 "data_size": 65536 00:16:27.923 }, 00:16:27.923 { 00:16:27.923 "name": "BaseBdev3", 00:16:27.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.923 "is_configured": false, 00:16:27.923 
"data_offset": 0, 00:16:27.923 "data_size": 0 00:16:27.923 } 00:16:27.923 ] 00:16:27.923 }' 00:16:27.923 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.923 13:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.490 13:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:28.749 [2024-07-25 13:16:39.010233] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:28.749 [2024-07-25 13:16:39.010269] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x11a1710 00:16:28.749 [2024-07-25 13:16:39.010277] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:28.749 [2024-07-25 13:16:39.010452] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1199400 00:16:28.749 [2024-07-25 13:16:39.010564] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11a1710 00:16:28.750 [2024-07-25 13:16:39.010574] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x11a1710 00:16:28.750 [2024-07-25 13:16:39.010722] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.750 BaseBdev3 00:16:28.750 13:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:28.750 13:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:28.750 13:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:28.750 13:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:28.750 13:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:28.750 13:16:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:28.750 13:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:29.009 13:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:29.009 [ 00:16:29.009 { 00:16:29.009 "name": "BaseBdev3", 00:16:29.009 "aliases": [ 00:16:29.009 "a6cd6e21-ad2f-4e27-a446-51f635d829f6" 00:16:29.009 ], 00:16:29.009 "product_name": "Malloc disk", 00:16:29.009 "block_size": 512, 00:16:29.009 "num_blocks": 65536, 00:16:29.009 "uuid": "a6cd6e21-ad2f-4e27-a446-51f635d829f6", 00:16:29.009 "assigned_rate_limits": { 00:16:29.009 "rw_ios_per_sec": 0, 00:16:29.009 "rw_mbytes_per_sec": 0, 00:16:29.009 "r_mbytes_per_sec": 0, 00:16:29.009 "w_mbytes_per_sec": 0 00:16:29.009 }, 00:16:29.009 "claimed": true, 00:16:29.009 "claim_type": "exclusive_write", 00:16:29.009 "zoned": false, 00:16:29.009 "supported_io_types": { 00:16:29.009 "read": true, 00:16:29.009 "write": true, 00:16:29.009 "unmap": true, 00:16:29.009 "flush": true, 00:16:29.009 "reset": true, 00:16:29.009 "nvme_admin": false, 00:16:29.009 "nvme_io": false, 00:16:29.009 "nvme_io_md": false, 00:16:29.009 "write_zeroes": true, 00:16:29.009 "zcopy": true, 00:16:29.009 "get_zone_info": false, 00:16:29.009 "zone_management": false, 00:16:29.009 "zone_append": false, 00:16:29.009 "compare": false, 00:16:29.009 "compare_and_write": false, 00:16:29.009 "abort": true, 00:16:29.009 "seek_hole": false, 00:16:29.009 "seek_data": false, 00:16:29.009 "copy": true, 00:16:29.009 "nvme_iov_md": false 00:16:29.009 }, 00:16:29.009 "memory_domains": [ 00:16:29.009 { 00:16:29.009 "dma_device_id": "system", 00:16:29.009 "dma_device_type": 1 00:16:29.009 }, 00:16:29.009 { 
00:16:29.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.009 "dma_device_type": 2 00:16:29.009 } 00:16:29.009 ], 00:16:29.009 "driver_specific": {} 00:16:29.009 } 00:16:29.009 ] 00:16:29.009 13:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:29.009 13:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:29.009 13:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:29.009 13:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:29.009 13:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:29.009 13:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:29.009 13:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:29.009 13:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:29.009 13:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:29.009 13:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.009 13:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.009 13:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.009 13:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.009 13:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.009 13:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.267 13:16:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:29.267 "name": "Existed_Raid", 00:16:29.267 "uuid": "b8610442-72cc-4b66-9704-cfcfc5878b0e", 00:16:29.268 "strip_size_kb": 0, 00:16:29.268 "state": "online", 00:16:29.268 "raid_level": "raid1", 00:16:29.268 "superblock": false, 00:16:29.268 "num_base_bdevs": 3, 00:16:29.268 "num_base_bdevs_discovered": 3, 00:16:29.268 "num_base_bdevs_operational": 3, 00:16:29.268 "base_bdevs_list": [ 00:16:29.268 { 00:16:29.268 "name": "BaseBdev1", 00:16:29.268 "uuid": "4580942d-052a-4e75-850b-18acccb9ab15", 00:16:29.268 "is_configured": true, 00:16:29.268 "data_offset": 0, 00:16:29.268 "data_size": 65536 00:16:29.268 }, 00:16:29.268 { 00:16:29.268 "name": "BaseBdev2", 00:16:29.268 "uuid": "d383a404-347a-4de8-85d2-0230cf609770", 00:16:29.268 "is_configured": true, 00:16:29.268 "data_offset": 0, 00:16:29.268 "data_size": 65536 00:16:29.268 }, 00:16:29.268 { 00:16:29.268 "name": "BaseBdev3", 00:16:29.268 "uuid": "a6cd6e21-ad2f-4e27-a446-51f635d829f6", 00:16:29.268 "is_configured": true, 00:16:29.268 "data_offset": 0, 00:16:29.268 "data_size": 65536 00:16:29.268 } 00:16:29.268 ] 00:16:29.268 }' 00:16:29.268 13:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:29.268 13:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.835 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:29.835 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:29.836 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:29.836 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:29.836 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:29.836 13:16:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:29.836 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:29.836 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:30.095 [2024-07-25 13:16:40.498485] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.095 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:30.095 "name": "Existed_Raid", 00:16:30.095 "aliases": [ 00:16:30.095 "b8610442-72cc-4b66-9704-cfcfc5878b0e" 00:16:30.095 ], 00:16:30.095 "product_name": "Raid Volume", 00:16:30.095 "block_size": 512, 00:16:30.095 "num_blocks": 65536, 00:16:30.095 "uuid": "b8610442-72cc-4b66-9704-cfcfc5878b0e", 00:16:30.095 "assigned_rate_limits": { 00:16:30.095 "rw_ios_per_sec": 0, 00:16:30.095 "rw_mbytes_per_sec": 0, 00:16:30.095 "r_mbytes_per_sec": 0, 00:16:30.095 "w_mbytes_per_sec": 0 00:16:30.095 }, 00:16:30.095 "claimed": false, 00:16:30.095 "zoned": false, 00:16:30.095 "supported_io_types": { 00:16:30.095 "read": true, 00:16:30.095 "write": true, 00:16:30.095 "unmap": false, 00:16:30.095 "flush": false, 00:16:30.095 "reset": true, 00:16:30.095 "nvme_admin": false, 00:16:30.095 "nvme_io": false, 00:16:30.095 "nvme_io_md": false, 00:16:30.095 "write_zeroes": true, 00:16:30.095 "zcopy": false, 00:16:30.095 "get_zone_info": false, 00:16:30.095 "zone_management": false, 00:16:30.095 "zone_append": false, 00:16:30.095 "compare": false, 00:16:30.095 "compare_and_write": false, 00:16:30.095 "abort": false, 00:16:30.095 "seek_hole": false, 00:16:30.095 "seek_data": false, 00:16:30.095 "copy": false, 00:16:30.095 "nvme_iov_md": false 00:16:30.095 }, 00:16:30.095 "memory_domains": [ 00:16:30.095 { 00:16:30.095 "dma_device_id": "system", 00:16:30.095 "dma_device_type": 1 00:16:30.095 }, 
00:16:30.095 { 00:16:30.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.095 "dma_device_type": 2 00:16:30.095 }, 00:16:30.095 { 00:16:30.095 "dma_device_id": "system", 00:16:30.095 "dma_device_type": 1 00:16:30.095 }, 00:16:30.095 { 00:16:30.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.095 "dma_device_type": 2 00:16:30.095 }, 00:16:30.095 { 00:16:30.095 "dma_device_id": "system", 00:16:30.095 "dma_device_type": 1 00:16:30.095 }, 00:16:30.095 { 00:16:30.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.095 "dma_device_type": 2 00:16:30.095 } 00:16:30.095 ], 00:16:30.095 "driver_specific": { 00:16:30.095 "raid": { 00:16:30.095 "uuid": "b8610442-72cc-4b66-9704-cfcfc5878b0e", 00:16:30.095 "strip_size_kb": 0, 00:16:30.095 "state": "online", 00:16:30.095 "raid_level": "raid1", 00:16:30.095 "superblock": false, 00:16:30.095 "num_base_bdevs": 3, 00:16:30.095 "num_base_bdevs_discovered": 3, 00:16:30.095 "num_base_bdevs_operational": 3, 00:16:30.095 "base_bdevs_list": [ 00:16:30.095 { 00:16:30.095 "name": "BaseBdev1", 00:16:30.095 "uuid": "4580942d-052a-4e75-850b-18acccb9ab15", 00:16:30.095 "is_configured": true, 00:16:30.095 "data_offset": 0, 00:16:30.095 "data_size": 65536 00:16:30.095 }, 00:16:30.095 { 00:16:30.095 "name": "BaseBdev2", 00:16:30.095 "uuid": "d383a404-347a-4de8-85d2-0230cf609770", 00:16:30.095 "is_configured": true, 00:16:30.095 "data_offset": 0, 00:16:30.095 "data_size": 65536 00:16:30.095 }, 00:16:30.095 { 00:16:30.095 "name": "BaseBdev3", 00:16:30.095 "uuid": "a6cd6e21-ad2f-4e27-a446-51f635d829f6", 00:16:30.095 "is_configured": true, 00:16:30.095 "data_offset": 0, 00:16:30.095 "data_size": 65536 00:16:30.095 } 00:16:30.095 ] 00:16:30.095 } 00:16:30.095 } 00:16:30.095 }' 00:16:30.095 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:30.095 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='BaseBdev1 00:16:30.095 BaseBdev2 00:16:30.095 BaseBdev3' 00:16:30.095 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:30.095 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:30.095 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:30.355 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:30.355 "name": "BaseBdev1", 00:16:30.355 "aliases": [ 00:16:30.355 "4580942d-052a-4e75-850b-18acccb9ab15" 00:16:30.355 ], 00:16:30.355 "product_name": "Malloc disk", 00:16:30.355 "block_size": 512, 00:16:30.355 "num_blocks": 65536, 00:16:30.355 "uuid": "4580942d-052a-4e75-850b-18acccb9ab15", 00:16:30.355 "assigned_rate_limits": { 00:16:30.355 "rw_ios_per_sec": 0, 00:16:30.355 "rw_mbytes_per_sec": 0, 00:16:30.355 "r_mbytes_per_sec": 0, 00:16:30.355 "w_mbytes_per_sec": 0 00:16:30.355 }, 00:16:30.355 "claimed": true, 00:16:30.355 "claim_type": "exclusive_write", 00:16:30.355 "zoned": false, 00:16:30.355 "supported_io_types": { 00:16:30.355 "read": true, 00:16:30.355 "write": true, 00:16:30.355 "unmap": true, 00:16:30.355 "flush": true, 00:16:30.355 "reset": true, 00:16:30.355 "nvme_admin": false, 00:16:30.355 "nvme_io": false, 00:16:30.355 "nvme_io_md": false, 00:16:30.355 "write_zeroes": true, 00:16:30.355 "zcopy": true, 00:16:30.355 "get_zone_info": false, 00:16:30.355 "zone_management": false, 00:16:30.355 "zone_append": false, 00:16:30.355 "compare": false, 00:16:30.355 "compare_and_write": false, 00:16:30.355 "abort": true, 00:16:30.355 "seek_hole": false, 00:16:30.355 "seek_data": false, 00:16:30.355 "copy": true, 00:16:30.355 "nvme_iov_md": false 00:16:30.355 }, 00:16:30.355 "memory_domains": [ 00:16:30.355 { 00:16:30.355 "dma_device_id": "system", 00:16:30.355 
"dma_device_type": 1 00:16:30.355 }, 00:16:30.355 { 00:16:30.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.355 "dma_device_type": 2 00:16:30.355 } 00:16:30.355 ], 00:16:30.355 "driver_specific": {} 00:16:30.355 }' 00:16:30.355 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:30.355 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:30.614 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:30.614 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:30.615 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:30.615 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:30.615 13:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:30.615 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:30.615 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:30.615 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:30.615 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:30.874 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:30.874 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:30.874 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:30.874 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:31.134 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:31.134 "name": 
"BaseBdev2", 00:16:31.134 "aliases": [ 00:16:31.134 "d383a404-347a-4de8-85d2-0230cf609770" 00:16:31.134 ], 00:16:31.134 "product_name": "Malloc disk", 00:16:31.134 "block_size": 512, 00:16:31.134 "num_blocks": 65536, 00:16:31.134 "uuid": "d383a404-347a-4de8-85d2-0230cf609770", 00:16:31.134 "assigned_rate_limits": { 00:16:31.134 "rw_ios_per_sec": 0, 00:16:31.134 "rw_mbytes_per_sec": 0, 00:16:31.134 "r_mbytes_per_sec": 0, 00:16:31.134 "w_mbytes_per_sec": 0 00:16:31.134 }, 00:16:31.134 "claimed": true, 00:16:31.134 "claim_type": "exclusive_write", 00:16:31.134 "zoned": false, 00:16:31.134 "supported_io_types": { 00:16:31.134 "read": true, 00:16:31.134 "write": true, 00:16:31.134 "unmap": true, 00:16:31.134 "flush": true, 00:16:31.134 "reset": true, 00:16:31.134 "nvme_admin": false, 00:16:31.134 "nvme_io": false, 00:16:31.134 "nvme_io_md": false, 00:16:31.134 "write_zeroes": true, 00:16:31.134 "zcopy": true, 00:16:31.134 "get_zone_info": false, 00:16:31.134 "zone_management": false, 00:16:31.134 "zone_append": false, 00:16:31.134 "compare": false, 00:16:31.134 "compare_and_write": false, 00:16:31.134 "abort": true, 00:16:31.134 "seek_hole": false, 00:16:31.134 "seek_data": false, 00:16:31.134 "copy": true, 00:16:31.134 "nvme_iov_md": false 00:16:31.134 }, 00:16:31.134 "memory_domains": [ 00:16:31.134 { 00:16:31.134 "dma_device_id": "system", 00:16:31.134 "dma_device_type": 1 00:16:31.134 }, 00:16:31.134 { 00:16:31.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.134 "dma_device_type": 2 00:16:31.134 } 00:16:31.134 ], 00:16:31.134 "driver_specific": {} 00:16:31.134 }' 00:16:31.134 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.134 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.134 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:31.134 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:16:31.134 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.134 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:31.134 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.134 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.393 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:31.393 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.393 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.393 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:31.393 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:31.393 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:31.393 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:31.652 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:31.652 "name": "BaseBdev3", 00:16:31.652 "aliases": [ 00:16:31.652 "a6cd6e21-ad2f-4e27-a446-51f635d829f6" 00:16:31.652 ], 00:16:31.652 "product_name": "Malloc disk", 00:16:31.652 "block_size": 512, 00:16:31.652 "num_blocks": 65536, 00:16:31.652 "uuid": "a6cd6e21-ad2f-4e27-a446-51f635d829f6", 00:16:31.652 "assigned_rate_limits": { 00:16:31.652 "rw_ios_per_sec": 0, 00:16:31.652 "rw_mbytes_per_sec": 0, 00:16:31.652 "r_mbytes_per_sec": 0, 00:16:31.652 "w_mbytes_per_sec": 0 00:16:31.652 }, 00:16:31.652 "claimed": true, 00:16:31.652 "claim_type": "exclusive_write", 00:16:31.652 "zoned": false, 00:16:31.652 "supported_io_types": { 
00:16:31.652 "read": true, 00:16:31.652 "write": true, 00:16:31.652 "unmap": true, 00:16:31.652 "flush": true, 00:16:31.652 "reset": true, 00:16:31.652 "nvme_admin": false, 00:16:31.652 "nvme_io": false, 00:16:31.652 "nvme_io_md": false, 00:16:31.652 "write_zeroes": true, 00:16:31.652 "zcopy": true, 00:16:31.652 "get_zone_info": false, 00:16:31.652 "zone_management": false, 00:16:31.652 "zone_append": false, 00:16:31.652 "compare": false, 00:16:31.652 "compare_and_write": false, 00:16:31.652 "abort": true, 00:16:31.652 "seek_hole": false, 00:16:31.652 "seek_data": false, 00:16:31.652 "copy": true, 00:16:31.652 "nvme_iov_md": false 00:16:31.652 }, 00:16:31.652 "memory_domains": [ 00:16:31.652 { 00:16:31.652 "dma_device_id": "system", 00:16:31.652 "dma_device_type": 1 00:16:31.652 }, 00:16:31.652 { 00:16:31.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.652 "dma_device_type": 2 00:16:31.652 } 00:16:31.652 ], 00:16:31.652 "driver_specific": {} 00:16:31.652 }' 00:16:31.652 13:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.652 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.652 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:31.652 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.652 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.652 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:31.652 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.911 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.911 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:31.911 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:16:31.911 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.911 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:31.911 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:32.170 [2024-07-25 13:16:42.487486] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:32.170 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:32.170 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:32.170 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:32.170 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:32.170 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:32.170 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:32.170 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:32.170 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:32.170 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:32.170 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:32.170 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:32.170 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:32.170 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:32.170 13:16:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:32.170 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:32.170 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.170 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.429 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.429 "name": "Existed_Raid", 00:16:32.429 "uuid": "b8610442-72cc-4b66-9704-cfcfc5878b0e", 00:16:32.429 "strip_size_kb": 0, 00:16:32.429 "state": "online", 00:16:32.429 "raid_level": "raid1", 00:16:32.429 "superblock": false, 00:16:32.429 "num_base_bdevs": 3, 00:16:32.429 "num_base_bdevs_discovered": 2, 00:16:32.429 "num_base_bdevs_operational": 2, 00:16:32.429 "base_bdevs_list": [ 00:16:32.429 { 00:16:32.429 "name": null, 00:16:32.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.429 "is_configured": false, 00:16:32.429 "data_offset": 0, 00:16:32.429 "data_size": 65536 00:16:32.429 }, 00:16:32.429 { 00:16:32.429 "name": "BaseBdev2", 00:16:32.429 "uuid": "d383a404-347a-4de8-85d2-0230cf609770", 00:16:32.430 "is_configured": true, 00:16:32.430 "data_offset": 0, 00:16:32.430 "data_size": 65536 00:16:32.430 }, 00:16:32.430 { 00:16:32.430 "name": "BaseBdev3", 00:16:32.430 "uuid": "a6cd6e21-ad2f-4e27-a446-51f635d829f6", 00:16:32.430 "is_configured": true, 00:16:32.430 "data_offset": 0, 00:16:32.430 "data_size": 65536 00:16:32.430 } 00:16:32.430 ] 00:16:32.430 }' 00:16:32.430 13:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.430 13:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.007 13:16:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:33.007 13:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:33.007 13:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.007 13:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:33.268 13:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:33.268 13:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:33.268 13:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:33.268 [2024-07-25 13:16:43.755855] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:33.527 13:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:33.527 13:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:33.527 13:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.527 13:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:33.527 13:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:33.527 13:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:33.787 13:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:33.787 [2024-07-25 
13:16:44.223079] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:33.787 [2024-07-25 13:16:44.223148] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.787 [2024-07-25 13:16:44.233498] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.787 [2024-07-25 13:16:44.233526] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.787 [2024-07-25 13:16:44.233536] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11a1710 name Existed_Raid, state offline 00:16:33.787 13:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:33.787 13:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:33.787 13:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.787 13:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:34.046 13:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:34.046 13:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:34.046 13:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:34.046 13:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:34.046 13:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:34.046 13:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:34.306 BaseBdev2 00:16:34.306 13:16:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:34.306 13:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:34.306 13:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:34.306 13:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:34.306 13:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:34.306 13:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:34.306 13:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:34.565 13:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:34.825 [ 00:16:34.825 { 00:16:34.825 "name": "BaseBdev2", 00:16:34.825 "aliases": [ 00:16:34.825 "4b1942e3-0ce6-44bb-ad4d-d63c1e9ec4de" 00:16:34.825 ], 00:16:34.825 "product_name": "Malloc disk", 00:16:34.825 "block_size": 512, 00:16:34.825 "num_blocks": 65536, 00:16:34.825 "uuid": "4b1942e3-0ce6-44bb-ad4d-d63c1e9ec4de", 00:16:34.825 "assigned_rate_limits": { 00:16:34.825 "rw_ios_per_sec": 0, 00:16:34.825 "rw_mbytes_per_sec": 0, 00:16:34.825 "r_mbytes_per_sec": 0, 00:16:34.825 "w_mbytes_per_sec": 0 00:16:34.825 }, 00:16:34.825 "claimed": false, 00:16:34.825 "zoned": false, 00:16:34.825 "supported_io_types": { 00:16:34.825 "read": true, 00:16:34.825 "write": true, 00:16:34.825 "unmap": true, 00:16:34.825 "flush": true, 00:16:34.825 "reset": true, 00:16:34.825 "nvme_admin": false, 00:16:34.825 "nvme_io": false, 00:16:34.825 "nvme_io_md": false, 00:16:34.825 "write_zeroes": true, 00:16:34.825 "zcopy": true, 00:16:34.825 "get_zone_info": false, 
00:16:34.825 "zone_management": false, 00:16:34.825 "zone_append": false, 00:16:34.825 "compare": false, 00:16:34.825 "compare_and_write": false, 00:16:34.825 "abort": true, 00:16:34.825 "seek_hole": false, 00:16:34.825 "seek_data": false, 00:16:34.825 "copy": true, 00:16:34.825 "nvme_iov_md": false 00:16:34.825 }, 00:16:34.825 "memory_domains": [ 00:16:34.825 { 00:16:34.825 "dma_device_id": "system", 00:16:34.825 "dma_device_type": 1 00:16:34.825 }, 00:16:34.825 { 00:16:34.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.825 "dma_device_type": 2 00:16:34.825 } 00:16:34.825 ], 00:16:34.825 "driver_specific": {} 00:16:34.825 } 00:16:34.825 ] 00:16:34.825 13:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:34.825 13:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:34.825 13:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:34.825 13:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:35.084 BaseBdev3 00:16:35.084 13:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:35.084 13:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:35.084 13:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:35.084 13:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:35.084 13:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:35.084 13:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:35.084 13:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:35.345 13:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:35.345 [ 00:16:35.345 { 00:16:35.345 "name": "BaseBdev3", 00:16:35.345 "aliases": [ 00:16:35.345 "564b1237-e11f-4000-bf60-da5d4e58dd01" 00:16:35.345 ], 00:16:35.345 "product_name": "Malloc disk", 00:16:35.345 "block_size": 512, 00:16:35.345 "num_blocks": 65536, 00:16:35.345 "uuid": "564b1237-e11f-4000-bf60-da5d4e58dd01", 00:16:35.345 "assigned_rate_limits": { 00:16:35.345 "rw_ios_per_sec": 0, 00:16:35.345 "rw_mbytes_per_sec": 0, 00:16:35.345 "r_mbytes_per_sec": 0, 00:16:35.345 "w_mbytes_per_sec": 0 00:16:35.345 }, 00:16:35.345 "claimed": false, 00:16:35.345 "zoned": false, 00:16:35.345 "supported_io_types": { 00:16:35.345 "read": true, 00:16:35.345 "write": true, 00:16:35.345 "unmap": true, 00:16:35.345 "flush": true, 00:16:35.345 "reset": true, 00:16:35.345 "nvme_admin": false, 00:16:35.345 "nvme_io": false, 00:16:35.345 "nvme_io_md": false, 00:16:35.345 "write_zeroes": true, 00:16:35.345 "zcopy": true, 00:16:35.345 "get_zone_info": false, 00:16:35.345 "zone_management": false, 00:16:35.345 "zone_append": false, 00:16:35.345 "compare": false, 00:16:35.345 "compare_and_write": false, 00:16:35.345 "abort": true, 00:16:35.345 "seek_hole": false, 00:16:35.345 "seek_data": false, 00:16:35.345 "copy": true, 00:16:35.345 "nvme_iov_md": false 00:16:35.345 }, 00:16:35.345 "memory_domains": [ 00:16:35.345 { 00:16:35.345 "dma_device_id": "system", 00:16:35.345 "dma_device_type": 1 00:16:35.345 }, 00:16:35.345 { 00:16:35.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.345 "dma_device_type": 2 00:16:35.345 } 00:16:35.345 ], 00:16:35.345 "driver_specific": {} 00:16:35.345 } 00:16:35.345 ] 00:16:35.604 13:16:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:35.604 13:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:35.604 13:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:35.604 13:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:35.604 [2024-07-25 13:16:46.066517] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:35.604 [2024-07-25 13:16:46.066556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:35.604 [2024-07-25 13:16:46.066574] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.604 [2024-07-25 13:16:46.067792] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:35.864 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:35.864 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:35.864 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:35.864 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:35.864 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:35.864 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:35.864 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.864 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:16:35.864 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.864 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.864 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.864 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.864 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.864 "name": "Existed_Raid", 00:16:35.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.864 "strip_size_kb": 0, 00:16:35.864 "state": "configuring", 00:16:35.864 "raid_level": "raid1", 00:16:35.864 "superblock": false, 00:16:35.864 "num_base_bdevs": 3, 00:16:35.864 "num_base_bdevs_discovered": 2, 00:16:35.864 "num_base_bdevs_operational": 3, 00:16:35.864 "base_bdevs_list": [ 00:16:35.864 { 00:16:35.864 "name": "BaseBdev1", 00:16:35.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.864 "is_configured": false, 00:16:35.864 "data_offset": 0, 00:16:35.864 "data_size": 0 00:16:35.864 }, 00:16:35.864 { 00:16:35.864 "name": "BaseBdev2", 00:16:35.864 "uuid": "4b1942e3-0ce6-44bb-ad4d-d63c1e9ec4de", 00:16:35.864 "is_configured": true, 00:16:35.864 "data_offset": 0, 00:16:35.864 "data_size": 65536 00:16:35.864 }, 00:16:35.864 { 00:16:35.864 "name": "BaseBdev3", 00:16:35.864 "uuid": "564b1237-e11f-4000-bf60-da5d4e58dd01", 00:16:35.864 "is_configured": true, 00:16:35.864 "data_offset": 0, 00:16:35.864 "data_size": 65536 00:16:35.864 } 00:16:35.864 ] 00:16:35.864 }' 00:16:35.864 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:35.864 13:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.433 13:16:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:36.692 [2024-07-25 13:16:47.073327] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:36.692 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:36.692 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:36.692 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:36.692 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:36.692 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:36.692 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:36.692 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:36.692 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:36.692 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:36.692 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:36.692 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.692 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.952 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.952 "name": "Existed_Raid", 00:16:36.952 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:36.952 "strip_size_kb": 0, 00:16:36.952 "state": "configuring", 00:16:36.952 "raid_level": "raid1", 00:16:36.952 "superblock": false, 00:16:36.952 "num_base_bdevs": 3, 00:16:36.952 "num_base_bdevs_discovered": 1, 00:16:36.952 "num_base_bdevs_operational": 3, 00:16:36.952 "base_bdevs_list": [ 00:16:36.952 { 00:16:36.952 "name": "BaseBdev1", 00:16:36.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.952 "is_configured": false, 00:16:36.952 "data_offset": 0, 00:16:36.952 "data_size": 0 00:16:36.952 }, 00:16:36.952 { 00:16:36.952 "name": null, 00:16:36.952 "uuid": "4b1942e3-0ce6-44bb-ad4d-d63c1e9ec4de", 00:16:36.952 "is_configured": false, 00:16:36.952 "data_offset": 0, 00:16:36.952 "data_size": 65536 00:16:36.952 }, 00:16:36.952 { 00:16:36.952 "name": "BaseBdev3", 00:16:36.952 "uuid": "564b1237-e11f-4000-bf60-da5d4e58dd01", 00:16:36.952 "is_configured": true, 00:16:36.952 "data_offset": 0, 00:16:36.952 "data_size": 65536 00:16:36.952 } 00:16:36.952 ] 00:16:36.952 }' 00:16:36.952 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.952 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.521 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.521 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:37.780 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:37.780 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:37.780 [2024-07-25 13:16:48.187449] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.780 
BaseBdev1 00:16:37.780 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:37.780 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:37.780 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:37.780 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:37.780 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:37.780 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:37.780 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:38.040 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:38.300 [ 00:16:38.300 { 00:16:38.300 "name": "BaseBdev1", 00:16:38.300 "aliases": [ 00:16:38.300 "f6332cbe-3ea0-4fd1-9e37-f5a5e5547867" 00:16:38.300 ], 00:16:38.300 "product_name": "Malloc disk", 00:16:38.300 "block_size": 512, 00:16:38.300 "num_blocks": 65536, 00:16:38.300 "uuid": "f6332cbe-3ea0-4fd1-9e37-f5a5e5547867", 00:16:38.300 "assigned_rate_limits": { 00:16:38.300 "rw_ios_per_sec": 0, 00:16:38.300 "rw_mbytes_per_sec": 0, 00:16:38.300 "r_mbytes_per_sec": 0, 00:16:38.300 "w_mbytes_per_sec": 0 00:16:38.300 }, 00:16:38.300 "claimed": true, 00:16:38.300 "claim_type": "exclusive_write", 00:16:38.300 "zoned": false, 00:16:38.300 "supported_io_types": { 00:16:38.300 "read": true, 00:16:38.300 "write": true, 00:16:38.300 "unmap": true, 00:16:38.300 "flush": true, 00:16:38.300 "reset": true, 00:16:38.300 "nvme_admin": false, 00:16:38.300 "nvme_io": false, 00:16:38.300 
"nvme_io_md": false, 00:16:38.300 "write_zeroes": true, 00:16:38.300 "zcopy": true, 00:16:38.300 "get_zone_info": false, 00:16:38.300 "zone_management": false, 00:16:38.300 "zone_append": false, 00:16:38.300 "compare": false, 00:16:38.300 "compare_and_write": false, 00:16:38.300 "abort": true, 00:16:38.300 "seek_hole": false, 00:16:38.300 "seek_data": false, 00:16:38.300 "copy": true, 00:16:38.300 "nvme_iov_md": false 00:16:38.300 }, 00:16:38.300 "memory_domains": [ 00:16:38.300 { 00:16:38.300 "dma_device_id": "system", 00:16:38.300 "dma_device_type": 1 00:16:38.300 }, 00:16:38.300 { 00:16:38.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.300 "dma_device_type": 2 00:16:38.300 } 00:16:38.300 ], 00:16:38.300 "driver_specific": {} 00:16:38.300 } 00:16:38.300 ] 00:16:38.300 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:38.300 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:38.300 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:38.300 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:38.300 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:38.300 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:38.300 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:38.300 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:38.300 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:38.300 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:38.300 13:16:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:16:38.300 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.300 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.300 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.300 "name": "Existed_Raid", 00:16:38.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.300 "strip_size_kb": 0, 00:16:38.300 "state": "configuring", 00:16:38.300 "raid_level": "raid1", 00:16:38.300 "superblock": false, 00:16:38.300 "num_base_bdevs": 3, 00:16:38.300 "num_base_bdevs_discovered": 2, 00:16:38.300 "num_base_bdevs_operational": 3, 00:16:38.300 "base_bdevs_list": [ 00:16:38.300 { 00:16:38.300 "name": "BaseBdev1", 00:16:38.300 "uuid": "f6332cbe-3ea0-4fd1-9e37-f5a5e5547867", 00:16:38.300 "is_configured": true, 00:16:38.300 "data_offset": 0, 00:16:38.300 "data_size": 65536 00:16:38.300 }, 00:16:38.300 { 00:16:38.300 "name": null, 00:16:38.300 "uuid": "4b1942e3-0ce6-44bb-ad4d-d63c1e9ec4de", 00:16:38.300 "is_configured": false, 00:16:38.300 "data_offset": 0, 00:16:38.300 "data_size": 65536 00:16:38.300 }, 00:16:38.300 { 00:16:38.300 "name": "BaseBdev3", 00:16:38.300 "uuid": "564b1237-e11f-4000-bf60-da5d4e58dd01", 00:16:38.300 "is_configured": true, 00:16:38.300 "data_offset": 0, 00:16:38.300 "data_size": 65536 00:16:38.300 } 00:16:38.300 ] 00:16:38.300 }' 00:16:38.300 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.300 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.869 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.869 13:16:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:39.129 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:39.129 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:39.388 [2024-07-25 13:16:49.691418] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:39.388 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:39.388 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:39.388 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:39.388 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:39.388 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:39.388 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:39.388 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.388 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.388 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.388 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.388 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.388 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:16:39.651 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.651 "name": "Existed_Raid", 00:16:39.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.651 "strip_size_kb": 0, 00:16:39.651 "state": "configuring", 00:16:39.651 "raid_level": "raid1", 00:16:39.651 "superblock": false, 00:16:39.651 "num_base_bdevs": 3, 00:16:39.651 "num_base_bdevs_discovered": 1, 00:16:39.651 "num_base_bdevs_operational": 3, 00:16:39.652 "base_bdevs_list": [ 00:16:39.652 { 00:16:39.652 "name": "BaseBdev1", 00:16:39.652 "uuid": "f6332cbe-3ea0-4fd1-9e37-f5a5e5547867", 00:16:39.652 "is_configured": true, 00:16:39.652 "data_offset": 0, 00:16:39.652 "data_size": 65536 00:16:39.652 }, 00:16:39.652 { 00:16:39.652 "name": null, 00:16:39.652 "uuid": "4b1942e3-0ce6-44bb-ad4d-d63c1e9ec4de", 00:16:39.652 "is_configured": false, 00:16:39.652 "data_offset": 0, 00:16:39.652 "data_size": 65536 00:16:39.652 }, 00:16:39.652 { 00:16:39.652 "name": null, 00:16:39.652 "uuid": "564b1237-e11f-4000-bf60-da5d4e58dd01", 00:16:39.652 "is_configured": false, 00:16:39.652 "data_offset": 0, 00:16:39.652 "data_size": 65536 00:16:39.652 } 00:16:39.652 ] 00:16:39.652 }' 00:16:39.652 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.652 13:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.252 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.252 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:40.252 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:40.252 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:40.524 [2024-07-25 13:16:50.934713] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:40.524 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:40.524 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:40.524 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:40.524 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:40.524 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:40.524 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:40.524 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.524 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.524 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.524 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.524 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.524 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.794 13:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.794 "name": "Existed_Raid", 00:16:40.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.794 "strip_size_kb": 0, 
00:16:40.794 "state": "configuring", 00:16:40.794 "raid_level": "raid1", 00:16:40.794 "superblock": false, 00:16:40.794 "num_base_bdevs": 3, 00:16:40.794 "num_base_bdevs_discovered": 2, 00:16:40.794 "num_base_bdevs_operational": 3, 00:16:40.794 "base_bdevs_list": [ 00:16:40.794 { 00:16:40.794 "name": "BaseBdev1", 00:16:40.794 "uuid": "f6332cbe-3ea0-4fd1-9e37-f5a5e5547867", 00:16:40.794 "is_configured": true, 00:16:40.794 "data_offset": 0, 00:16:40.794 "data_size": 65536 00:16:40.794 }, 00:16:40.794 { 00:16:40.794 "name": null, 00:16:40.794 "uuid": "4b1942e3-0ce6-44bb-ad4d-d63c1e9ec4de", 00:16:40.794 "is_configured": false, 00:16:40.794 "data_offset": 0, 00:16:40.794 "data_size": 65536 00:16:40.794 }, 00:16:40.794 { 00:16:40.794 "name": "BaseBdev3", 00:16:40.794 "uuid": "564b1237-e11f-4000-bf60-da5d4e58dd01", 00:16:40.794 "is_configured": true, 00:16:40.794 "data_offset": 0, 00:16:40.794 "data_size": 65536 00:16:40.794 } 00:16:40.794 ] 00:16:40.794 }' 00:16:40.794 13:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.794 13:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.363 13:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.363 13:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:41.622 13:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:41.622 13:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:41.881 [2024-07-25 13:16:52.198212] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:41.881 13:16:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:41.881 13:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:41.881 13:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:41.881 13:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:41.881 13:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:41.881 13:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:41.881 13:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.881 13:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.881 13:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.881 13:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.881 13:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.881 13:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.140 13:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:42.140 "name": "Existed_Raid", 00:16:42.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.140 "strip_size_kb": 0, 00:16:42.140 "state": "configuring", 00:16:42.140 "raid_level": "raid1", 00:16:42.140 "superblock": false, 00:16:42.140 "num_base_bdevs": 3, 00:16:42.140 "num_base_bdevs_discovered": 1, 00:16:42.140 "num_base_bdevs_operational": 3, 00:16:42.140 "base_bdevs_list": [ 00:16:42.140 { 00:16:42.140 "name": null, 00:16:42.140 "uuid": 
"f6332cbe-3ea0-4fd1-9e37-f5a5e5547867", 00:16:42.140 "is_configured": false, 00:16:42.140 "data_offset": 0, 00:16:42.140 "data_size": 65536 00:16:42.140 }, 00:16:42.140 { 00:16:42.140 "name": null, 00:16:42.140 "uuid": "4b1942e3-0ce6-44bb-ad4d-d63c1e9ec4de", 00:16:42.140 "is_configured": false, 00:16:42.140 "data_offset": 0, 00:16:42.140 "data_size": 65536 00:16:42.140 }, 00:16:42.140 { 00:16:42.140 "name": "BaseBdev3", 00:16:42.140 "uuid": "564b1237-e11f-4000-bf60-da5d4e58dd01", 00:16:42.140 "is_configured": true, 00:16:42.140 "data_offset": 0, 00:16:42.140 "data_size": 65536 00:16:42.140 } 00:16:42.140 ] 00:16:42.140 }' 00:16:42.140 13:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:42.140 13:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.708 13:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.708 13:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:43.277 13:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:43.277 13:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:43.536 [2024-07-25 13:16:53.788500] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.536 13:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:43.536 13:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:43.536 13:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:16:43.536 13:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:43.536 13:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:43.536 13:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:43.536 13:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.536 13:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.536 13:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.536 13:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.536 13:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.536 13:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.795 13:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:43.795 "name": "Existed_Raid", 00:16:43.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.795 "strip_size_kb": 0, 00:16:43.795 "state": "configuring", 00:16:43.795 "raid_level": "raid1", 00:16:43.795 "superblock": false, 00:16:43.795 "num_base_bdevs": 3, 00:16:43.795 "num_base_bdevs_discovered": 2, 00:16:43.795 "num_base_bdevs_operational": 3, 00:16:43.795 "base_bdevs_list": [ 00:16:43.795 { 00:16:43.795 "name": null, 00:16:43.795 "uuid": "f6332cbe-3ea0-4fd1-9e37-f5a5e5547867", 00:16:43.795 "is_configured": false, 00:16:43.795 "data_offset": 0, 00:16:43.795 "data_size": 65536 00:16:43.795 }, 00:16:43.795 { 00:16:43.795 "name": "BaseBdev2", 00:16:43.796 "uuid": "4b1942e3-0ce6-44bb-ad4d-d63c1e9ec4de", 00:16:43.796 "is_configured": true, 
00:16:43.796 "data_offset": 0, 00:16:43.796 "data_size": 65536 00:16:43.796 }, 00:16:43.796 { 00:16:43.796 "name": "BaseBdev3", 00:16:43.796 "uuid": "564b1237-e11f-4000-bf60-da5d4e58dd01", 00:16:43.796 "is_configured": true, 00:16:43.796 "data_offset": 0, 00:16:43.796 "data_size": 65536 00:16:43.796 } 00:16:43.796 ] 00:16:43.796 }' 00:16:43.796 13:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:43.796 13:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.364 13:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.364 13:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:44.933 13:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:44.933 13:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.934 13:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:44.934 13:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f6332cbe-3ea0-4fd1-9e37-f5a5e5547867 00:16:45.193 [2024-07-25 13:16:55.596437] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:45.193 [2024-07-25 13:16:55.596471] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x11984f0 00:16:45.193 [2024-07-25 13:16:55.596479] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:45.193 [2024-07-25 13:16:55.596655] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11a2c50 00:16:45.193 [2024-07-25 13:16:55.596763] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11984f0 00:16:45.193 [2024-07-25 13:16:55.596772] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x11984f0 00:16:45.193 [2024-07-25 13:16:55.596921] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.193 NewBaseBdev 00:16:45.193 13:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:45.193 13:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:45.193 13:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:45.193 13:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:45.193 13:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:45.193 13:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:45.193 13:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:45.452 13:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:45.711 [ 00:16:45.711 { 00:16:45.711 "name": "NewBaseBdev", 00:16:45.711 "aliases": [ 00:16:45.711 "f6332cbe-3ea0-4fd1-9e37-f5a5e5547867" 00:16:45.711 ], 00:16:45.711 "product_name": "Malloc disk", 00:16:45.711 "block_size": 512, 00:16:45.711 "num_blocks": 65536, 00:16:45.711 "uuid": "f6332cbe-3ea0-4fd1-9e37-f5a5e5547867", 00:16:45.711 "assigned_rate_limits": { 00:16:45.711 "rw_ios_per_sec": 0, 
00:16:45.711 "rw_mbytes_per_sec": 0, 00:16:45.711 "r_mbytes_per_sec": 0, 00:16:45.711 "w_mbytes_per_sec": 0 00:16:45.711 }, 00:16:45.711 "claimed": true, 00:16:45.711 "claim_type": "exclusive_write", 00:16:45.711 "zoned": false, 00:16:45.711 "supported_io_types": { 00:16:45.711 "read": true, 00:16:45.711 "write": true, 00:16:45.711 "unmap": true, 00:16:45.711 "flush": true, 00:16:45.711 "reset": true, 00:16:45.711 "nvme_admin": false, 00:16:45.711 "nvme_io": false, 00:16:45.711 "nvme_io_md": false, 00:16:45.711 "write_zeroes": true, 00:16:45.711 "zcopy": true, 00:16:45.711 "get_zone_info": false, 00:16:45.711 "zone_management": false, 00:16:45.711 "zone_append": false, 00:16:45.711 "compare": false, 00:16:45.711 "compare_and_write": false, 00:16:45.711 "abort": true, 00:16:45.711 "seek_hole": false, 00:16:45.711 "seek_data": false, 00:16:45.711 "copy": true, 00:16:45.711 "nvme_iov_md": false 00:16:45.711 }, 00:16:45.711 "memory_domains": [ 00:16:45.711 { 00:16:45.711 "dma_device_id": "system", 00:16:45.711 "dma_device_type": 1 00:16:45.711 }, 00:16:45.711 { 00:16:45.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.711 "dma_device_type": 2 00:16:45.711 } 00:16:45.711 ], 00:16:45.711 "driver_specific": {} 00:16:45.711 } 00:16:45.711 ] 00:16:45.711 13:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:45.711 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:45.711 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:45.711 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:45.711 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:45.711 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:45.711 13:16:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:45.711 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:45.711 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:45.711 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:45.711 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:45.711 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.711 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.971 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:45.971 "name": "Existed_Raid", 00:16:45.971 "uuid": "fe1bf2a9-0bf6-4f5b-9856-5cd7c79ccc93", 00:16:45.971 "strip_size_kb": 0, 00:16:45.971 "state": "online", 00:16:45.971 "raid_level": "raid1", 00:16:45.971 "superblock": false, 00:16:45.971 "num_base_bdevs": 3, 00:16:45.971 "num_base_bdevs_discovered": 3, 00:16:45.971 "num_base_bdevs_operational": 3, 00:16:45.971 "base_bdevs_list": [ 00:16:45.971 { 00:16:45.971 "name": "NewBaseBdev", 00:16:45.971 "uuid": "f6332cbe-3ea0-4fd1-9e37-f5a5e5547867", 00:16:45.971 "is_configured": true, 00:16:45.971 "data_offset": 0, 00:16:45.971 "data_size": 65536 00:16:45.971 }, 00:16:45.971 { 00:16:45.971 "name": "BaseBdev2", 00:16:45.971 "uuid": "4b1942e3-0ce6-44bb-ad4d-d63c1e9ec4de", 00:16:45.971 "is_configured": true, 00:16:45.971 "data_offset": 0, 00:16:45.971 "data_size": 65536 00:16:45.971 }, 00:16:45.971 { 00:16:45.971 "name": "BaseBdev3", 00:16:45.971 "uuid": "564b1237-e11f-4000-bf60-da5d4e58dd01", 00:16:45.971 "is_configured": true, 00:16:45.971 "data_offset": 0, 
00:16:45.971 "data_size": 65536 00:16:45.971 } 00:16:45.971 ] 00:16:45.971 }' 00:16:45.971 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:45.971 13:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.539 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:46.539 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:46.539 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:46.539 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:46.539 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:46.539 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:46.539 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:46.539 13:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:46.798 [2024-07-25 13:16:57.064586] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.798 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:46.798 "name": "Existed_Raid", 00:16:46.798 "aliases": [ 00:16:46.798 "fe1bf2a9-0bf6-4f5b-9856-5cd7c79ccc93" 00:16:46.798 ], 00:16:46.798 "product_name": "Raid Volume", 00:16:46.798 "block_size": 512, 00:16:46.798 "num_blocks": 65536, 00:16:46.798 "uuid": "fe1bf2a9-0bf6-4f5b-9856-5cd7c79ccc93", 00:16:46.798 "assigned_rate_limits": { 00:16:46.798 "rw_ios_per_sec": 0, 00:16:46.798 "rw_mbytes_per_sec": 0, 00:16:46.798 "r_mbytes_per_sec": 0, 00:16:46.798 "w_mbytes_per_sec": 0 00:16:46.798 }, 00:16:46.798 
"claimed": false, 00:16:46.798 "zoned": false, 00:16:46.798 "supported_io_types": { 00:16:46.798 "read": true, 00:16:46.798 "write": true, 00:16:46.798 "unmap": false, 00:16:46.798 "flush": false, 00:16:46.798 "reset": true, 00:16:46.798 "nvme_admin": false, 00:16:46.798 "nvme_io": false, 00:16:46.798 "nvme_io_md": false, 00:16:46.798 "write_zeroes": true, 00:16:46.798 "zcopy": false, 00:16:46.798 "get_zone_info": false, 00:16:46.798 "zone_management": false, 00:16:46.798 "zone_append": false, 00:16:46.798 "compare": false, 00:16:46.798 "compare_and_write": false, 00:16:46.798 "abort": false, 00:16:46.798 "seek_hole": false, 00:16:46.798 "seek_data": false, 00:16:46.798 "copy": false, 00:16:46.798 "nvme_iov_md": false 00:16:46.798 }, 00:16:46.798 "memory_domains": [ 00:16:46.798 { 00:16:46.798 "dma_device_id": "system", 00:16:46.798 "dma_device_type": 1 00:16:46.798 }, 00:16:46.798 { 00:16:46.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.798 "dma_device_type": 2 00:16:46.798 }, 00:16:46.798 { 00:16:46.798 "dma_device_id": "system", 00:16:46.798 "dma_device_type": 1 00:16:46.798 }, 00:16:46.798 { 00:16:46.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.798 "dma_device_type": 2 00:16:46.798 }, 00:16:46.798 { 00:16:46.798 "dma_device_id": "system", 00:16:46.798 "dma_device_type": 1 00:16:46.798 }, 00:16:46.798 { 00:16:46.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.798 "dma_device_type": 2 00:16:46.798 } 00:16:46.798 ], 00:16:46.798 "driver_specific": { 00:16:46.798 "raid": { 00:16:46.798 "uuid": "fe1bf2a9-0bf6-4f5b-9856-5cd7c79ccc93", 00:16:46.798 "strip_size_kb": 0, 00:16:46.798 "state": "online", 00:16:46.798 "raid_level": "raid1", 00:16:46.798 "superblock": false, 00:16:46.798 "num_base_bdevs": 3, 00:16:46.798 "num_base_bdevs_discovered": 3, 00:16:46.798 "num_base_bdevs_operational": 3, 00:16:46.798 "base_bdevs_list": [ 00:16:46.798 { 00:16:46.798 "name": "NewBaseBdev", 00:16:46.798 "uuid": "f6332cbe-3ea0-4fd1-9e37-f5a5e5547867", 
00:16:46.798 "is_configured": true, 00:16:46.798 "data_offset": 0, 00:16:46.798 "data_size": 65536 00:16:46.798 }, 00:16:46.798 { 00:16:46.798 "name": "BaseBdev2", 00:16:46.798 "uuid": "4b1942e3-0ce6-44bb-ad4d-d63c1e9ec4de", 00:16:46.798 "is_configured": true, 00:16:46.798 "data_offset": 0, 00:16:46.798 "data_size": 65536 00:16:46.798 }, 00:16:46.798 { 00:16:46.798 "name": "BaseBdev3", 00:16:46.798 "uuid": "564b1237-e11f-4000-bf60-da5d4e58dd01", 00:16:46.798 "is_configured": true, 00:16:46.798 "data_offset": 0, 00:16:46.798 "data_size": 65536 00:16:46.798 } 00:16:46.798 ] 00:16:46.798 } 00:16:46.798 } 00:16:46.798 }' 00:16:46.798 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:46.798 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:46.798 BaseBdev2 00:16:46.798 BaseBdev3' 00:16:46.798 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:46.798 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:46.798 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:47.058 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:47.058 "name": "NewBaseBdev", 00:16:47.058 "aliases": [ 00:16:47.058 "f6332cbe-3ea0-4fd1-9e37-f5a5e5547867" 00:16:47.058 ], 00:16:47.058 "product_name": "Malloc disk", 00:16:47.058 "block_size": 512, 00:16:47.058 "num_blocks": 65536, 00:16:47.058 "uuid": "f6332cbe-3ea0-4fd1-9e37-f5a5e5547867", 00:16:47.058 "assigned_rate_limits": { 00:16:47.058 "rw_ios_per_sec": 0, 00:16:47.058 "rw_mbytes_per_sec": 0, 00:16:47.058 "r_mbytes_per_sec": 0, 00:16:47.058 "w_mbytes_per_sec": 0 00:16:47.058 }, 00:16:47.058 
"claimed": true, 00:16:47.058 "claim_type": "exclusive_write", 00:16:47.058 "zoned": false, 00:16:47.058 "supported_io_types": { 00:16:47.058 "read": true, 00:16:47.058 "write": true, 00:16:47.058 "unmap": true, 00:16:47.058 "flush": true, 00:16:47.058 "reset": true, 00:16:47.058 "nvme_admin": false, 00:16:47.058 "nvme_io": false, 00:16:47.058 "nvme_io_md": false, 00:16:47.058 "write_zeroes": true, 00:16:47.058 "zcopy": true, 00:16:47.058 "get_zone_info": false, 00:16:47.058 "zone_management": false, 00:16:47.058 "zone_append": false, 00:16:47.058 "compare": false, 00:16:47.058 "compare_and_write": false, 00:16:47.058 "abort": true, 00:16:47.058 "seek_hole": false, 00:16:47.058 "seek_data": false, 00:16:47.058 "copy": true, 00:16:47.058 "nvme_iov_md": false 00:16:47.058 }, 00:16:47.058 "memory_domains": [ 00:16:47.058 { 00:16:47.058 "dma_device_id": "system", 00:16:47.058 "dma_device_type": 1 00:16:47.058 }, 00:16:47.058 { 00:16:47.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.058 "dma_device_type": 2 00:16:47.058 } 00:16:47.058 ], 00:16:47.058 "driver_specific": {} 00:16:47.058 }' 00:16:47.058 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:47.058 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:47.058 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:47.058 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:47.058 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:47.058 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:47.058 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:47.317 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:47.317 13:16:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:47.317 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:47.317 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:47.318 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:47.318 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:47.318 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:47.318 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:47.577 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:47.577 "name": "BaseBdev2", 00:16:47.577 "aliases": [ 00:16:47.577 "4b1942e3-0ce6-44bb-ad4d-d63c1e9ec4de" 00:16:47.577 ], 00:16:47.577 "product_name": "Malloc disk", 00:16:47.577 "block_size": 512, 00:16:47.577 "num_blocks": 65536, 00:16:47.577 "uuid": "4b1942e3-0ce6-44bb-ad4d-d63c1e9ec4de", 00:16:47.577 "assigned_rate_limits": { 00:16:47.577 "rw_ios_per_sec": 0, 00:16:47.577 "rw_mbytes_per_sec": 0, 00:16:47.577 "r_mbytes_per_sec": 0, 00:16:47.577 "w_mbytes_per_sec": 0 00:16:47.577 }, 00:16:47.577 "claimed": true, 00:16:47.577 "claim_type": "exclusive_write", 00:16:47.577 "zoned": false, 00:16:47.577 "supported_io_types": { 00:16:47.577 "read": true, 00:16:47.577 "write": true, 00:16:47.577 "unmap": true, 00:16:47.577 "flush": true, 00:16:47.577 "reset": true, 00:16:47.577 "nvme_admin": false, 00:16:47.577 "nvme_io": false, 00:16:47.577 "nvme_io_md": false, 00:16:47.577 "write_zeroes": true, 00:16:47.577 "zcopy": true, 00:16:47.577 "get_zone_info": false, 00:16:47.577 "zone_management": false, 00:16:47.577 "zone_append": false, 00:16:47.577 "compare": false, 00:16:47.577 "compare_and_write": false, 
00:16:47.577 "abort": true, 00:16:47.577 "seek_hole": false, 00:16:47.577 "seek_data": false, 00:16:47.577 "copy": true, 00:16:47.577 "nvme_iov_md": false 00:16:47.577 }, 00:16:47.577 "memory_domains": [ 00:16:47.577 { 00:16:47.577 "dma_device_id": "system", 00:16:47.577 "dma_device_type": 1 00:16:47.577 }, 00:16:47.577 { 00:16:47.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.577 "dma_device_type": 2 00:16:47.577 } 00:16:47.577 ], 00:16:47.577 "driver_specific": {} 00:16:47.577 }' 00:16:47.577 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:47.577 13:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:47.577 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:47.577 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:47.577 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:47.836 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:47.836 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:47.836 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:47.836 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:47.836 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:47.836 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:47.836 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:47.836 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:47.836 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:48.096 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:48.096 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:48.096 "name": "BaseBdev3", 00:16:48.096 "aliases": [ 00:16:48.096 "564b1237-e11f-4000-bf60-da5d4e58dd01" 00:16:48.096 ], 00:16:48.096 "product_name": "Malloc disk", 00:16:48.096 "block_size": 512, 00:16:48.096 "num_blocks": 65536, 00:16:48.096 "uuid": "564b1237-e11f-4000-bf60-da5d4e58dd01", 00:16:48.096 "assigned_rate_limits": { 00:16:48.096 "rw_ios_per_sec": 0, 00:16:48.096 "rw_mbytes_per_sec": 0, 00:16:48.096 "r_mbytes_per_sec": 0, 00:16:48.096 "w_mbytes_per_sec": 0 00:16:48.096 }, 00:16:48.096 "claimed": true, 00:16:48.096 "claim_type": "exclusive_write", 00:16:48.096 "zoned": false, 00:16:48.096 "supported_io_types": { 00:16:48.096 "read": true, 00:16:48.096 "write": true, 00:16:48.096 "unmap": true, 00:16:48.096 "flush": true, 00:16:48.096 "reset": true, 00:16:48.096 "nvme_admin": false, 00:16:48.096 "nvme_io": false, 00:16:48.096 "nvme_io_md": false, 00:16:48.096 "write_zeroes": true, 00:16:48.096 "zcopy": true, 00:16:48.096 "get_zone_info": false, 00:16:48.096 "zone_management": false, 00:16:48.096 "zone_append": false, 00:16:48.096 "compare": false, 00:16:48.096 "compare_and_write": false, 00:16:48.096 "abort": true, 00:16:48.096 "seek_hole": false, 00:16:48.096 "seek_data": false, 00:16:48.096 "copy": true, 00:16:48.096 "nvme_iov_md": false 00:16:48.096 }, 00:16:48.096 "memory_domains": [ 00:16:48.096 { 00:16:48.096 "dma_device_id": "system", 00:16:48.096 "dma_device_type": 1 00:16:48.096 }, 00:16:48.096 { 00:16:48.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.096 "dma_device_type": 2 00:16:48.096 } 00:16:48.096 ], 00:16:48.096 "driver_specific": {} 00:16:48.096 }' 00:16:48.096 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:48.356 13:16:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:48.356 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:48.356 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:48.356 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:48.356 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:48.356 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:48.356 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:48.356 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:48.356 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:48.615 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:48.615 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:48.615 13:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:48.874 [2024-07-25 13:16:59.121909] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:48.874 [2024-07-25 13:16:59.121931] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.874 [2024-07-25 13:16:59.121975] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.874 [2024-07-25 13:16:59.122216] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.874 [2024-07-25 13:16:59.122228] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11984f0 name Existed_Raid, state offline 
00:16:48.874 13:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 882539 00:16:48.874 13:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 882539 ']' 00:16:48.874 13:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 882539 00:16:48.874 13:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:48.874 13:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:48.874 13:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 882539 00:16:48.874 13:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:48.874 13:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:48.874 13:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 882539' 00:16:48.874 killing process with pid 882539 00:16:48.874 13:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 882539 00:16:48.874 [2024-07-25 13:16:59.191942] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:48.874 13:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 882539 00:16:48.874 [2024-07-25 13:16:59.215128] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:49.134 00:16:49.134 real 0m27.223s 00:16:49.134 user 0m50.084s 00:16:49.134 sys 0m4.739s 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.134 ************************************ 00:16:49.134 END TEST 
raid_state_function_test 00:16:49.134 ************************************ 00:16:49.134 13:16:59 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:16:49.134 13:16:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:49.134 13:16:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:49.134 13:16:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:49.134 ************************************ 00:16:49.134 START TEST raid_state_function_test_sb 00:16:49.134 ************************************ 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:49.134 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:49.135 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:49.135 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:49.135 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:49.135 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:49.135 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:49.135 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=887857 00:16:49.135 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 887857' 00:16:49.135 Process raid pid: 887857 00:16:49.135 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 887857 /var/tmp/spdk-raid.sock 00:16:49.135 13:16:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@831 -- # '[' -z 887857 ']' 00:16:49.135 13:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:49.135 13:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:49.135 13:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:49.135 13:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:49.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:49.135 13:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:49.135 13:16:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.135 [2024-07-25 13:16:59.549669] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:16:49.135 [2024-07-25 13:16:59.549726] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.135 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.135 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.135 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.135 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.135 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.135 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.135 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.135 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.135 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.135 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.135 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:49.395 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:49.395 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:49.395 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:49.395 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:49.395 [2024-07-25 13:16:59.680535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.395 [2024-07-25 13:16:59.766223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.395 [2024-07-25 13:16:59.823216] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.395 [2024-07-25 13:16:59.823248] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.333 13:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.333 13:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:50.333 13:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:50.593 [2024-07-25 13:17:00.873505] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:50.593 [2024-07-25 13:17:00.873544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:16:50.593 [2024-07-25 13:17:00.873554] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:50.593 [2024-07-25 13:17:00.873565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.593 [2024-07-25 13:17:00.873574] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:50.593 [2024-07-25 13:17:00.873585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:50.593 13:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:50.593 13:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:50.593 13:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:50.593 13:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:50.593 13:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:50.593 13:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:50.593 13:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:50.593 13:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:50.593 13:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:50.593 13:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:50.593 13:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.593 13:17:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.162 13:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:51.162 "name": "Existed_Raid", 00:16:51.162 "uuid": "b4cddee1-d366-4006-90fc-b07320119326", 00:16:51.162 "strip_size_kb": 0, 00:16:51.162 "state": "configuring", 00:16:51.162 "raid_level": "raid1", 00:16:51.162 "superblock": true, 00:16:51.162 "num_base_bdevs": 3, 00:16:51.162 "num_base_bdevs_discovered": 0, 00:16:51.162 "num_base_bdevs_operational": 3, 00:16:51.162 "base_bdevs_list": [ 00:16:51.162 { 00:16:51.162 "name": "BaseBdev1", 00:16:51.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.162 "is_configured": false, 00:16:51.162 "data_offset": 0, 00:16:51.162 "data_size": 0 00:16:51.162 }, 00:16:51.162 { 00:16:51.162 "name": "BaseBdev2", 00:16:51.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.162 "is_configured": false, 00:16:51.162 "data_offset": 0, 00:16:51.162 "data_size": 0 00:16:51.162 }, 00:16:51.162 { 00:16:51.162 "name": "BaseBdev3", 00:16:51.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.162 "is_configured": false, 00:16:51.162 "data_offset": 0, 00:16:51.162 "data_size": 0 00:16:51.162 } 00:16:51.162 ] 00:16:51.162 }' 00:16:51.162 13:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:51.162 13:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.731 13:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:51.731 [2024-07-25 13:17:02.176793] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:51.731 [2024-07-25 13:17:02.176823] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10e6f40 name Existed_Raid, state configuring 00:16:51.731 13:17:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:51.990 [2024-07-25 13:17:02.405414] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:51.990 [2024-07-25 13:17:02.405444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:51.990 [2024-07-25 13:17:02.405453] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.990 [2024-07-25 13:17:02.405464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.990 [2024-07-25 13:17:02.405472] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:51.990 [2024-07-25 13:17:02.405482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:51.990 13:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:52.249 [2024-07-25 13:17:02.643440] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.250 BaseBdev1 00:16:52.250 13:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:52.250 13:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:52.250 13:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:52.250 13:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:52.250 13:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:52.250 13:17:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:52.250 13:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:52.509 13:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:52.770 [ 00:16:52.770 { 00:16:52.770 "name": "BaseBdev1", 00:16:52.770 "aliases": [ 00:16:52.770 "12932a97-ddb1-480d-b98a-c8e916614ddf" 00:16:52.770 ], 00:16:52.770 "product_name": "Malloc disk", 00:16:52.770 "block_size": 512, 00:16:52.770 "num_blocks": 65536, 00:16:52.770 "uuid": "12932a97-ddb1-480d-b98a-c8e916614ddf", 00:16:52.770 "assigned_rate_limits": { 00:16:52.770 "rw_ios_per_sec": 0, 00:16:52.770 "rw_mbytes_per_sec": 0, 00:16:52.770 "r_mbytes_per_sec": 0, 00:16:52.770 "w_mbytes_per_sec": 0 00:16:52.770 }, 00:16:52.770 "claimed": true, 00:16:52.770 "claim_type": "exclusive_write", 00:16:52.770 "zoned": false, 00:16:52.770 "supported_io_types": { 00:16:52.770 "read": true, 00:16:52.770 "write": true, 00:16:52.770 "unmap": true, 00:16:52.770 "flush": true, 00:16:52.770 "reset": true, 00:16:52.770 "nvme_admin": false, 00:16:52.770 "nvme_io": false, 00:16:52.770 "nvme_io_md": false, 00:16:52.770 "write_zeroes": true, 00:16:52.770 "zcopy": true, 00:16:52.770 "get_zone_info": false, 00:16:52.770 "zone_management": false, 00:16:52.770 "zone_append": false, 00:16:52.770 "compare": false, 00:16:52.770 "compare_and_write": false, 00:16:52.770 "abort": true, 00:16:52.770 "seek_hole": false, 00:16:52.770 "seek_data": false, 00:16:52.770 "copy": true, 00:16:52.770 "nvme_iov_md": false 00:16:52.770 }, 00:16:52.770 "memory_domains": [ 00:16:52.770 { 00:16:52.770 "dma_device_id": "system", 00:16:52.770 "dma_device_type": 1 00:16:52.770 }, 
00:16:52.770 { 00:16:52.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.770 "dma_device_type": 2 00:16:52.770 } 00:16:52.770 ], 00:16:52.770 "driver_specific": {} 00:16:52.770 } 00:16:52.770 ] 00:16:52.770 13:17:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:52.770 13:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:52.770 13:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:52.770 13:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:52.770 13:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:52.770 13:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:52.770 13:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:52.770 13:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:52.770 13:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:52.770 13:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:52.770 13:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:52.770 13:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.770 13:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.087 13:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:53.087 "name": "Existed_Raid", 00:16:53.087 "uuid": 
"6a359bcd-bc03-43c1-bf9e-94b3ec123246", 00:16:53.087 "strip_size_kb": 0, 00:16:53.087 "state": "configuring", 00:16:53.087 "raid_level": "raid1", 00:16:53.087 "superblock": true, 00:16:53.087 "num_base_bdevs": 3, 00:16:53.087 "num_base_bdevs_discovered": 1, 00:16:53.087 "num_base_bdevs_operational": 3, 00:16:53.087 "base_bdevs_list": [ 00:16:53.087 { 00:16:53.087 "name": "BaseBdev1", 00:16:53.087 "uuid": "12932a97-ddb1-480d-b98a-c8e916614ddf", 00:16:53.087 "is_configured": true, 00:16:53.087 "data_offset": 2048, 00:16:53.087 "data_size": 63488 00:16:53.087 }, 00:16:53.087 { 00:16:53.087 "name": "BaseBdev2", 00:16:53.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.087 "is_configured": false, 00:16:53.087 "data_offset": 0, 00:16:53.087 "data_size": 0 00:16:53.087 }, 00:16:53.087 { 00:16:53.087 "name": "BaseBdev3", 00:16:53.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.087 "is_configured": false, 00:16:53.087 "data_offset": 0, 00:16:53.087 "data_size": 0 00:16:53.087 } 00:16:53.087 ] 00:16:53.087 }' 00:16:53.087 13:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:53.087 13:17:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.026 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:54.026 [2024-07-25 13:17:04.412097] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.026 [2024-07-25 13:17:04.412131] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10e6810 name Existed_Raid, state configuring 00:16:54.026 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 
00:16:54.285 [2024-07-25 13:17:04.640943] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.285 [2024-07-25 13:17:04.642332] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.285 [2024-07-25 13:17:04.642363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.285 [2024-07-25 13:17:04.642372] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:54.285 [2024-07-25 13:17:04.642383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:54.285 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:54.285 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:54.285 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:54.285 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:54.285 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:54.285 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:54.285 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:54.285 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:54.285 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:54.285 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:54.285 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:54.285 13:17:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:54.285 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.286 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.544 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.545 "name": "Existed_Raid", 00:16:54.545 "uuid": "e95f3f27-c3ea-46b6-8e4b-7797ee80ab11", 00:16:54.545 "strip_size_kb": 0, 00:16:54.545 "state": "configuring", 00:16:54.545 "raid_level": "raid1", 00:16:54.545 "superblock": true, 00:16:54.545 "num_base_bdevs": 3, 00:16:54.545 "num_base_bdevs_discovered": 1, 00:16:54.545 "num_base_bdevs_operational": 3, 00:16:54.545 "base_bdevs_list": [ 00:16:54.545 { 00:16:54.545 "name": "BaseBdev1", 00:16:54.545 "uuid": "12932a97-ddb1-480d-b98a-c8e916614ddf", 00:16:54.545 "is_configured": true, 00:16:54.545 "data_offset": 2048, 00:16:54.545 "data_size": 63488 00:16:54.545 }, 00:16:54.545 { 00:16:54.545 "name": "BaseBdev2", 00:16:54.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.545 "is_configured": false, 00:16:54.545 "data_offset": 0, 00:16:54.545 "data_size": 0 00:16:54.545 }, 00:16:54.545 { 00:16:54.545 "name": "BaseBdev3", 00:16:54.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.545 "is_configured": false, 00:16:54.545 "data_offset": 0, 00:16:54.545 "data_size": 0 00:16:54.545 } 00:16:54.545 ] 00:16:54.545 }' 00:16:54.545 13:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.545 13:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.113 13:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:55.373 [2024-07-25 13:17:05.682866] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:55.373 BaseBdev2 00:16:55.373 13:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:55.373 13:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:55.373 13:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:55.373 13:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:55.373 13:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:55.373 13:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:55.373 13:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:55.632 13:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:55.891 [ 00:16:55.891 { 00:16:55.891 "name": "BaseBdev2", 00:16:55.891 "aliases": [ 00:16:55.891 "7e703bd6-052a-4a85-ab97-7c51a18c6702" 00:16:55.891 ], 00:16:55.891 "product_name": "Malloc disk", 00:16:55.891 "block_size": 512, 00:16:55.891 "num_blocks": 65536, 00:16:55.891 "uuid": "7e703bd6-052a-4a85-ab97-7c51a18c6702", 00:16:55.891 "assigned_rate_limits": { 00:16:55.891 "rw_ios_per_sec": 0, 00:16:55.891 "rw_mbytes_per_sec": 0, 00:16:55.891 "r_mbytes_per_sec": 0, 00:16:55.891 "w_mbytes_per_sec": 0 00:16:55.891 }, 00:16:55.891 "claimed": true, 00:16:55.891 "claim_type": "exclusive_write", 00:16:55.891 "zoned": false, 00:16:55.891 "supported_io_types": { 
00:16:55.891 "read": true, 00:16:55.891 "write": true, 00:16:55.891 "unmap": true, 00:16:55.891 "flush": true, 00:16:55.891 "reset": true, 00:16:55.891 "nvme_admin": false, 00:16:55.891 "nvme_io": false, 00:16:55.891 "nvme_io_md": false, 00:16:55.891 "write_zeroes": true, 00:16:55.891 "zcopy": true, 00:16:55.891 "get_zone_info": false, 00:16:55.891 "zone_management": false, 00:16:55.891 "zone_append": false, 00:16:55.891 "compare": false, 00:16:55.891 "compare_and_write": false, 00:16:55.891 "abort": true, 00:16:55.891 "seek_hole": false, 00:16:55.891 "seek_data": false, 00:16:55.891 "copy": true, 00:16:55.891 "nvme_iov_md": false 00:16:55.891 }, 00:16:55.891 "memory_domains": [ 00:16:55.891 { 00:16:55.891 "dma_device_id": "system", 00:16:55.891 "dma_device_type": 1 00:16:55.891 }, 00:16:55.891 { 00:16:55.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.891 "dma_device_type": 2 00:16:55.891 } 00:16:55.891 ], 00:16:55.891 "driver_specific": {} 00:16:55.891 } 00:16:55.891 ] 00:16:55.891 13:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:55.891 13:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:55.891 13:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:55.891 13:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:55.891 13:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:55.891 13:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:55.891 13:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:55.891 13:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:55.891 13:17:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:55.891 13:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.891 13:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.891 13:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.891 13:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.891 13:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.891 13:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.150 13:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:56.150 "name": "Existed_Raid", 00:16:56.150 "uuid": "e95f3f27-c3ea-46b6-8e4b-7797ee80ab11", 00:16:56.150 "strip_size_kb": 0, 00:16:56.150 "state": "configuring", 00:16:56.150 "raid_level": "raid1", 00:16:56.150 "superblock": true, 00:16:56.150 "num_base_bdevs": 3, 00:16:56.150 "num_base_bdevs_discovered": 2, 00:16:56.150 "num_base_bdevs_operational": 3, 00:16:56.150 "base_bdevs_list": [ 00:16:56.150 { 00:16:56.150 "name": "BaseBdev1", 00:16:56.150 "uuid": "12932a97-ddb1-480d-b98a-c8e916614ddf", 00:16:56.151 "is_configured": true, 00:16:56.151 "data_offset": 2048, 00:16:56.151 "data_size": 63488 00:16:56.151 }, 00:16:56.151 { 00:16:56.151 "name": "BaseBdev2", 00:16:56.151 "uuid": "7e703bd6-052a-4a85-ab97-7c51a18c6702", 00:16:56.151 "is_configured": true, 00:16:56.151 "data_offset": 2048, 00:16:56.151 "data_size": 63488 00:16:56.151 }, 00:16:56.151 { 00:16:56.151 "name": "BaseBdev3", 00:16:56.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.151 "is_configured": false, 00:16:56.151 "data_offset": 0, 00:16:56.151 
"data_size": 0 00:16:56.151 } 00:16:56.151 ] 00:16:56.151 }' 00:16:56.151 13:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:56.151 13:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.719 13:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:56.719 [2024-07-25 13:17:07.182083] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:56.719 [2024-07-25 13:17:07.182233] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x10e7710 00:16:56.719 [2024-07-25 13:17:07.182247] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:56.719 [2024-07-25 13:17:07.182408] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10de360 00:16:56.719 [2024-07-25 13:17:07.182523] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10e7710 00:16:56.719 [2024-07-25 13:17:07.182533] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10e7710 00:16:56.719 [2024-07-25 13:17:07.182620] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.719 BaseBdev3 00:16:56.719 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:56.719 13:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:56.719 13:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:56.719 13:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:56.719 13:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:56.719 13:17:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:56.719 13:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.978 13:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:57.238 [ 00:16:57.238 { 00:16:57.238 "name": "BaseBdev3", 00:16:57.238 "aliases": [ 00:16:57.238 "6fd79fac-6379-4107-bc72-ab3b5185e28c" 00:16:57.238 ], 00:16:57.238 "product_name": "Malloc disk", 00:16:57.238 "block_size": 512, 00:16:57.238 "num_blocks": 65536, 00:16:57.238 "uuid": "6fd79fac-6379-4107-bc72-ab3b5185e28c", 00:16:57.238 "assigned_rate_limits": { 00:16:57.238 "rw_ios_per_sec": 0, 00:16:57.238 "rw_mbytes_per_sec": 0, 00:16:57.238 "r_mbytes_per_sec": 0, 00:16:57.238 "w_mbytes_per_sec": 0 00:16:57.238 }, 00:16:57.238 "claimed": true, 00:16:57.238 "claim_type": "exclusive_write", 00:16:57.238 "zoned": false, 00:16:57.238 "supported_io_types": { 00:16:57.238 "read": true, 00:16:57.238 "write": true, 00:16:57.238 "unmap": true, 00:16:57.238 "flush": true, 00:16:57.238 "reset": true, 00:16:57.238 "nvme_admin": false, 00:16:57.238 "nvme_io": false, 00:16:57.238 "nvme_io_md": false, 00:16:57.238 "write_zeroes": true, 00:16:57.238 "zcopy": true, 00:16:57.238 "get_zone_info": false, 00:16:57.238 "zone_management": false, 00:16:57.238 "zone_append": false, 00:16:57.238 "compare": false, 00:16:57.238 "compare_and_write": false, 00:16:57.238 "abort": true, 00:16:57.238 "seek_hole": false, 00:16:57.238 "seek_data": false, 00:16:57.238 "copy": true, 00:16:57.238 "nvme_iov_md": false 00:16:57.238 }, 00:16:57.238 "memory_domains": [ 00:16:57.238 { 00:16:57.238 "dma_device_id": "system", 00:16:57.238 "dma_device_type": 1 00:16:57.238 }, 
00:16:57.238 { 00:16:57.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.238 "dma_device_type": 2 00:16:57.238 } 00:16:57.238 ], 00:16:57.238 "driver_specific": {} 00:16:57.238 } 00:16:57.238 ] 00:16:57.238 13:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:57.238 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:57.238 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:57.238 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:57.238 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:57.238 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:57.238 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:57.238 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:57.238 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:57.238 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.238 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.238 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.238 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.238 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.238 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:16:57.497 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.497 "name": "Existed_Raid", 00:16:57.497 "uuid": "e95f3f27-c3ea-46b6-8e4b-7797ee80ab11", 00:16:57.497 "strip_size_kb": 0, 00:16:57.497 "state": "online", 00:16:57.497 "raid_level": "raid1", 00:16:57.497 "superblock": true, 00:16:57.497 "num_base_bdevs": 3, 00:16:57.497 "num_base_bdevs_discovered": 3, 00:16:57.497 "num_base_bdevs_operational": 3, 00:16:57.497 "base_bdevs_list": [ 00:16:57.497 { 00:16:57.497 "name": "BaseBdev1", 00:16:57.497 "uuid": "12932a97-ddb1-480d-b98a-c8e916614ddf", 00:16:57.497 "is_configured": true, 00:16:57.497 "data_offset": 2048, 00:16:57.497 "data_size": 63488 00:16:57.497 }, 00:16:57.497 { 00:16:57.497 "name": "BaseBdev2", 00:16:57.497 "uuid": "7e703bd6-052a-4a85-ab97-7c51a18c6702", 00:16:57.497 "is_configured": true, 00:16:57.497 "data_offset": 2048, 00:16:57.498 "data_size": 63488 00:16:57.498 }, 00:16:57.498 { 00:16:57.498 "name": "BaseBdev3", 00:16:57.498 "uuid": "6fd79fac-6379-4107-bc72-ab3b5185e28c", 00:16:57.498 "is_configured": true, 00:16:57.498 "data_offset": 2048, 00:16:57.498 "data_size": 63488 00:16:57.498 } 00:16:57.498 ] 00:16:57.498 }' 00:16:57.498 13:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.498 13:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.065 13:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:58.065 13:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:58.065 13:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:58.065 13:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:58.065 13:17:08 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:58.065 13:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:58.065 13:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:58.065 13:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:58.324 [2024-07-25 13:17:08.646231] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.324 13:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:58.324 "name": "Existed_Raid", 00:16:58.324 "aliases": [ 00:16:58.324 "e95f3f27-c3ea-46b6-8e4b-7797ee80ab11" 00:16:58.324 ], 00:16:58.324 "product_name": "Raid Volume", 00:16:58.324 "block_size": 512, 00:16:58.324 "num_blocks": 63488, 00:16:58.324 "uuid": "e95f3f27-c3ea-46b6-8e4b-7797ee80ab11", 00:16:58.324 "assigned_rate_limits": { 00:16:58.324 "rw_ios_per_sec": 0, 00:16:58.324 "rw_mbytes_per_sec": 0, 00:16:58.324 "r_mbytes_per_sec": 0, 00:16:58.324 "w_mbytes_per_sec": 0 00:16:58.324 }, 00:16:58.324 "claimed": false, 00:16:58.324 "zoned": false, 00:16:58.324 "supported_io_types": { 00:16:58.324 "read": true, 00:16:58.324 "write": true, 00:16:58.324 "unmap": false, 00:16:58.324 "flush": false, 00:16:58.324 "reset": true, 00:16:58.324 "nvme_admin": false, 00:16:58.324 "nvme_io": false, 00:16:58.324 "nvme_io_md": false, 00:16:58.324 "write_zeroes": true, 00:16:58.324 "zcopy": false, 00:16:58.324 "get_zone_info": false, 00:16:58.324 "zone_management": false, 00:16:58.324 "zone_append": false, 00:16:58.324 "compare": false, 00:16:58.324 "compare_and_write": false, 00:16:58.324 "abort": false, 00:16:58.324 "seek_hole": false, 00:16:58.324 "seek_data": false, 00:16:58.324 "copy": false, 00:16:58.324 "nvme_iov_md": false 00:16:58.324 }, 00:16:58.324 "memory_domains": [ 00:16:58.324 { 
00:16:58.324 "dma_device_id": "system", 00:16:58.324 "dma_device_type": 1 00:16:58.324 }, 00:16:58.324 { 00:16:58.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.324 "dma_device_type": 2 00:16:58.324 }, 00:16:58.324 { 00:16:58.324 "dma_device_id": "system", 00:16:58.324 "dma_device_type": 1 00:16:58.324 }, 00:16:58.324 { 00:16:58.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.324 "dma_device_type": 2 00:16:58.324 }, 00:16:58.324 { 00:16:58.324 "dma_device_id": "system", 00:16:58.324 "dma_device_type": 1 00:16:58.324 }, 00:16:58.324 { 00:16:58.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.324 "dma_device_type": 2 00:16:58.324 } 00:16:58.324 ], 00:16:58.324 "driver_specific": { 00:16:58.324 "raid": { 00:16:58.324 "uuid": "e95f3f27-c3ea-46b6-8e4b-7797ee80ab11", 00:16:58.324 "strip_size_kb": 0, 00:16:58.324 "state": "online", 00:16:58.324 "raid_level": "raid1", 00:16:58.324 "superblock": true, 00:16:58.324 "num_base_bdevs": 3, 00:16:58.324 "num_base_bdevs_discovered": 3, 00:16:58.324 "num_base_bdevs_operational": 3, 00:16:58.324 "base_bdevs_list": [ 00:16:58.324 { 00:16:58.324 "name": "BaseBdev1", 00:16:58.324 "uuid": "12932a97-ddb1-480d-b98a-c8e916614ddf", 00:16:58.324 "is_configured": true, 00:16:58.324 "data_offset": 2048, 00:16:58.324 "data_size": 63488 00:16:58.324 }, 00:16:58.324 { 00:16:58.324 "name": "BaseBdev2", 00:16:58.324 "uuid": "7e703bd6-052a-4a85-ab97-7c51a18c6702", 00:16:58.324 "is_configured": true, 00:16:58.324 "data_offset": 2048, 00:16:58.324 "data_size": 63488 00:16:58.324 }, 00:16:58.324 { 00:16:58.324 "name": "BaseBdev3", 00:16:58.324 "uuid": "6fd79fac-6379-4107-bc72-ab3b5185e28c", 00:16:58.324 "is_configured": true, 00:16:58.324 "data_offset": 2048, 00:16:58.324 "data_size": 63488 00:16:58.324 } 00:16:58.324 ] 00:16:58.324 } 00:16:58.324 } 00:16:58.324 }' 00:16:58.324 13:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:16:58.324 13:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:58.324 BaseBdev2 00:16:58.324 BaseBdev3' 00:16:58.324 13:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:58.324 13:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:58.324 13:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:58.582 13:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:58.582 "name": "BaseBdev1", 00:16:58.582 "aliases": [ 00:16:58.582 "12932a97-ddb1-480d-b98a-c8e916614ddf" 00:16:58.582 ], 00:16:58.582 "product_name": "Malloc disk", 00:16:58.582 "block_size": 512, 00:16:58.582 "num_blocks": 65536, 00:16:58.582 "uuid": "12932a97-ddb1-480d-b98a-c8e916614ddf", 00:16:58.582 "assigned_rate_limits": { 00:16:58.582 "rw_ios_per_sec": 0, 00:16:58.582 "rw_mbytes_per_sec": 0, 00:16:58.582 "r_mbytes_per_sec": 0, 00:16:58.582 "w_mbytes_per_sec": 0 00:16:58.582 }, 00:16:58.582 "claimed": true, 00:16:58.582 "claim_type": "exclusive_write", 00:16:58.582 "zoned": false, 00:16:58.582 "supported_io_types": { 00:16:58.582 "read": true, 00:16:58.582 "write": true, 00:16:58.582 "unmap": true, 00:16:58.582 "flush": true, 00:16:58.582 "reset": true, 00:16:58.582 "nvme_admin": false, 00:16:58.582 "nvme_io": false, 00:16:58.582 "nvme_io_md": false, 00:16:58.582 "write_zeroes": true, 00:16:58.582 "zcopy": true, 00:16:58.582 "get_zone_info": false, 00:16:58.582 "zone_management": false, 00:16:58.582 "zone_append": false, 00:16:58.582 "compare": false, 00:16:58.582 "compare_and_write": false, 00:16:58.582 "abort": true, 00:16:58.582 "seek_hole": false, 00:16:58.582 "seek_data": false, 00:16:58.582 "copy": true, 00:16:58.582 "nvme_iov_md": false 00:16:58.582 
}, 00:16:58.582 "memory_domains": [ 00:16:58.582 { 00:16:58.582 "dma_device_id": "system", 00:16:58.582 "dma_device_type": 1 00:16:58.582 }, 00:16:58.582 { 00:16:58.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.582 "dma_device_type": 2 00:16:58.582 } 00:16:58.582 ], 00:16:58.582 "driver_specific": {} 00:16:58.582 }' 00:16:58.582 13:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.582 13:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.582 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:58.582 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.582 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.840 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:58.840 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.840 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.840 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:58.840 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.840 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.840 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:58.840 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:58.840 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:58.840 13:17:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:59.097 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:59.097 "name": "BaseBdev2", 00:16:59.097 "aliases": [ 00:16:59.097 "7e703bd6-052a-4a85-ab97-7c51a18c6702" 00:16:59.097 ], 00:16:59.097 "product_name": "Malloc disk", 00:16:59.097 "block_size": 512, 00:16:59.097 "num_blocks": 65536, 00:16:59.097 "uuid": "7e703bd6-052a-4a85-ab97-7c51a18c6702", 00:16:59.097 "assigned_rate_limits": { 00:16:59.097 "rw_ios_per_sec": 0, 00:16:59.097 "rw_mbytes_per_sec": 0, 00:16:59.097 "r_mbytes_per_sec": 0, 00:16:59.097 "w_mbytes_per_sec": 0 00:16:59.097 }, 00:16:59.097 "claimed": true, 00:16:59.097 "claim_type": "exclusive_write", 00:16:59.097 "zoned": false, 00:16:59.097 "supported_io_types": { 00:16:59.097 "read": true, 00:16:59.097 "write": true, 00:16:59.097 "unmap": true, 00:16:59.097 "flush": true, 00:16:59.097 "reset": true, 00:16:59.097 "nvme_admin": false, 00:16:59.097 "nvme_io": false, 00:16:59.097 "nvme_io_md": false, 00:16:59.097 "write_zeroes": true, 00:16:59.097 "zcopy": true, 00:16:59.097 "get_zone_info": false, 00:16:59.097 "zone_management": false, 00:16:59.097 "zone_append": false, 00:16:59.097 "compare": false, 00:16:59.097 "compare_and_write": false, 00:16:59.097 "abort": true, 00:16:59.098 "seek_hole": false, 00:16:59.098 "seek_data": false, 00:16:59.098 "copy": true, 00:16:59.098 "nvme_iov_md": false 00:16:59.098 }, 00:16:59.098 "memory_domains": [ 00:16:59.098 { 00:16:59.098 "dma_device_id": "system", 00:16:59.098 "dma_device_type": 1 00:16:59.098 }, 00:16:59.098 { 00:16:59.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.098 "dma_device_type": 2 00:16:59.098 } 00:16:59.098 ], 00:16:59.098 "driver_specific": {} 00:16:59.098 }' 00:16:59.098 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.098 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.098 13:17:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:59.098 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.356 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.356 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:59.356 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.356 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.356 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:59.356 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.356 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.356 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:59.356 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:59.356 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:59.356 13:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:59.615 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:59.615 "name": "BaseBdev3", 00:16:59.615 "aliases": [ 00:16:59.615 "6fd79fac-6379-4107-bc72-ab3b5185e28c" 00:16:59.615 ], 00:16:59.615 "product_name": "Malloc disk", 00:16:59.615 "block_size": 512, 00:16:59.615 "num_blocks": 65536, 00:16:59.615 "uuid": "6fd79fac-6379-4107-bc72-ab3b5185e28c", 00:16:59.615 "assigned_rate_limits": { 00:16:59.615 "rw_ios_per_sec": 0, 00:16:59.615 "rw_mbytes_per_sec": 0, 00:16:59.615 
"r_mbytes_per_sec": 0, 00:16:59.615 "w_mbytes_per_sec": 0 00:16:59.615 }, 00:16:59.615 "claimed": true, 00:16:59.615 "claim_type": "exclusive_write", 00:16:59.615 "zoned": false, 00:16:59.615 "supported_io_types": { 00:16:59.615 "read": true, 00:16:59.615 "write": true, 00:16:59.615 "unmap": true, 00:16:59.615 "flush": true, 00:16:59.615 "reset": true, 00:16:59.615 "nvme_admin": false, 00:16:59.615 "nvme_io": false, 00:16:59.615 "nvme_io_md": false, 00:16:59.615 "write_zeroes": true, 00:16:59.615 "zcopy": true, 00:16:59.615 "get_zone_info": false, 00:16:59.615 "zone_management": false, 00:16:59.615 "zone_append": false, 00:16:59.615 "compare": false, 00:16:59.615 "compare_and_write": false, 00:16:59.615 "abort": true, 00:16:59.615 "seek_hole": false, 00:16:59.615 "seek_data": false, 00:16:59.615 "copy": true, 00:16:59.615 "nvme_iov_md": false 00:16:59.615 }, 00:16:59.615 "memory_domains": [ 00:16:59.615 { 00:16:59.615 "dma_device_id": "system", 00:16:59.615 "dma_device_type": 1 00:16:59.615 }, 00:16:59.615 { 00:16:59.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.615 "dma_device_type": 2 00:16:59.615 } 00:16:59.615 ], 00:16:59.615 "driver_specific": {} 00:16:59.615 }' 00:16:59.615 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.874 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.874 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:59.874 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.874 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.874 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:59.874 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.874 13:17:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.874 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:59.874 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.133 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.133 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:00.133 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:00.133 [2024-07-25 13:17:10.615195] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:00.392 "name": "Existed_Raid", 00:17:00.392 "uuid": "e95f3f27-c3ea-46b6-8e4b-7797ee80ab11", 00:17:00.392 "strip_size_kb": 0, 00:17:00.392 "state": "online", 00:17:00.392 "raid_level": "raid1", 00:17:00.392 "superblock": true, 00:17:00.392 "num_base_bdevs": 3, 00:17:00.392 "num_base_bdevs_discovered": 2, 00:17:00.392 "num_base_bdevs_operational": 2, 00:17:00.392 "base_bdevs_list": [ 00:17:00.392 { 00:17:00.392 "name": null, 00:17:00.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.392 "is_configured": false, 00:17:00.392 "data_offset": 2048, 00:17:00.392 "data_size": 63488 00:17:00.392 }, 00:17:00.392 { 00:17:00.392 "name": "BaseBdev2", 00:17:00.392 "uuid": "7e703bd6-052a-4a85-ab97-7c51a18c6702", 00:17:00.392 "is_configured": true, 00:17:00.392 "data_offset": 2048, 00:17:00.392 "data_size": 63488 00:17:00.392 }, 00:17:00.392 { 00:17:00.392 "name": "BaseBdev3", 00:17:00.392 "uuid": "6fd79fac-6379-4107-bc72-ab3b5185e28c", 00:17:00.392 "is_configured": true, 00:17:00.392 "data_offset": 2048, 00:17:00.392 
"data_size": 63488 00:17:00.392 } 00:17:00.392 ] 00:17:00.392 }' 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:00.392 13:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.961 13:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:00.961 13:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:00.961 13:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.961 13:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:01.219 13:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:01.219 13:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:01.219 13:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:01.478 [2024-07-25 13:17:11.863549] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:01.478 13:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:01.478 13:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:01.478 13:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.478 13:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:01.737 13:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:17:01.737 13:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:01.737 13:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:01.996 [2024-07-25 13:17:12.330854] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:01.996 [2024-07-25 13:17:12.330924] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.996 [2024-07-25 13:17:12.341173] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.996 [2024-07-25 13:17:12.341204] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.996 [2024-07-25 13:17:12.341215] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10e7710 name Existed_Raid, state offline 00:17:01.996 13:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:01.996 13:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:01.996 13:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.996 13:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:02.255 13:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:02.255 13:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:02.255 13:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:02.255 13:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:02.255 
13:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:02.255 13:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:02.514 BaseBdev2 00:17:02.514 13:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:02.514 13:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:02.514 13:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:02.514 13:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:02.514 13:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:02.514 13:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:02.514 13:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:02.772 13:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:02.772 [ 00:17:02.772 { 00:17:02.772 "name": "BaseBdev2", 00:17:02.772 "aliases": [ 00:17:02.772 "58a557f8-fead-40c1-a845-1d791f212484" 00:17:02.772 ], 00:17:02.772 "product_name": "Malloc disk", 00:17:02.772 "block_size": 512, 00:17:02.772 "num_blocks": 65536, 00:17:02.772 "uuid": "58a557f8-fead-40c1-a845-1d791f212484", 00:17:02.772 "assigned_rate_limits": { 00:17:02.772 "rw_ios_per_sec": 0, 00:17:02.772 "rw_mbytes_per_sec": 0, 00:17:02.772 "r_mbytes_per_sec": 0, 00:17:02.772 "w_mbytes_per_sec": 0 00:17:02.772 }, 
00:17:02.772 "claimed": false, 00:17:02.772 "zoned": false, 00:17:02.772 "supported_io_types": { 00:17:02.772 "read": true, 00:17:02.772 "write": true, 00:17:02.772 "unmap": true, 00:17:02.772 "flush": true, 00:17:02.772 "reset": true, 00:17:02.772 "nvme_admin": false, 00:17:02.772 "nvme_io": false, 00:17:02.772 "nvme_io_md": false, 00:17:02.772 "write_zeroes": true, 00:17:02.772 "zcopy": true, 00:17:02.772 "get_zone_info": false, 00:17:02.772 "zone_management": false, 00:17:02.772 "zone_append": false, 00:17:02.772 "compare": false, 00:17:02.772 "compare_and_write": false, 00:17:02.772 "abort": true, 00:17:02.772 "seek_hole": false, 00:17:02.772 "seek_data": false, 00:17:02.772 "copy": true, 00:17:02.772 "nvme_iov_md": false 00:17:02.772 }, 00:17:02.772 "memory_domains": [ 00:17:02.772 { 00:17:02.772 "dma_device_id": "system", 00:17:02.772 "dma_device_type": 1 00:17:02.772 }, 00:17:02.772 { 00:17:02.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.772 "dma_device_type": 2 00:17:02.772 } 00:17:02.772 ], 00:17:02.772 "driver_specific": {} 00:17:02.772 } 00:17:02.772 ] 00:17:03.031 13:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:03.031 13:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:03.031 13:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:03.031 13:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:03.031 BaseBdev3 00:17:03.031 13:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:03.031 13:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:03.031 13:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:17:03.031 13:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:03.031 13:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:03.031 13:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:03.031 13:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:03.289 13:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:03.548 [ 00:17:03.548 { 00:17:03.548 "name": "BaseBdev3", 00:17:03.548 "aliases": [ 00:17:03.548 "e30275d9-8b3b-4e36-bcd0-f693f52df466" 00:17:03.548 ], 00:17:03.548 "product_name": "Malloc disk", 00:17:03.548 "block_size": 512, 00:17:03.548 "num_blocks": 65536, 00:17:03.548 "uuid": "e30275d9-8b3b-4e36-bcd0-f693f52df466", 00:17:03.548 "assigned_rate_limits": { 00:17:03.548 "rw_ios_per_sec": 0, 00:17:03.548 "rw_mbytes_per_sec": 0, 00:17:03.548 "r_mbytes_per_sec": 0, 00:17:03.548 "w_mbytes_per_sec": 0 00:17:03.548 }, 00:17:03.548 "claimed": false, 00:17:03.548 "zoned": false, 00:17:03.548 "supported_io_types": { 00:17:03.548 "read": true, 00:17:03.548 "write": true, 00:17:03.548 "unmap": true, 00:17:03.548 "flush": true, 00:17:03.548 "reset": true, 00:17:03.548 "nvme_admin": false, 00:17:03.548 "nvme_io": false, 00:17:03.548 "nvme_io_md": false, 00:17:03.548 "write_zeroes": true, 00:17:03.548 "zcopy": true, 00:17:03.548 "get_zone_info": false, 00:17:03.548 "zone_management": false, 00:17:03.548 "zone_append": false, 00:17:03.548 "compare": false, 00:17:03.548 "compare_and_write": false, 00:17:03.548 "abort": true, 00:17:03.548 "seek_hole": false, 00:17:03.548 "seek_data": false, 00:17:03.548 
"copy": true, 00:17:03.548 "nvme_iov_md": false 00:17:03.548 }, 00:17:03.548 "memory_domains": [ 00:17:03.548 { 00:17:03.548 "dma_device_id": "system", 00:17:03.548 "dma_device_type": 1 00:17:03.548 }, 00:17:03.548 { 00:17:03.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.548 "dma_device_type": 2 00:17:03.548 } 00:17:03.548 ], 00:17:03.548 "driver_specific": {} 00:17:03.548 } 00:17:03.548 ] 00:17:03.548 13:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:03.548 13:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:03.548 13:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:03.548 13:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:03.808 [2024-07-25 13:17:14.137999] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:03.808 [2024-07-25 13:17:14.138034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:03.808 [2024-07-25 13:17:14.138051] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.808 [2024-07-25 13:17:14.139272] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:03.808 13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:03.808 13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:03.808 13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:03.808 13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:17:03.808 13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:03.808 13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:03.808 13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.808 13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.808 13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.808 13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.808 13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.808 13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.067 13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:04.067 "name": "Existed_Raid", 00:17:04.067 "uuid": "25af8971-a7bb-4b30-a4be-17ceeb0f97d5", 00:17:04.067 "strip_size_kb": 0, 00:17:04.067 "state": "configuring", 00:17:04.067 "raid_level": "raid1", 00:17:04.067 "superblock": true, 00:17:04.067 "num_base_bdevs": 3, 00:17:04.067 "num_base_bdevs_discovered": 2, 00:17:04.067 "num_base_bdevs_operational": 3, 00:17:04.067 "base_bdevs_list": [ 00:17:04.067 { 00:17:04.067 "name": "BaseBdev1", 00:17:04.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.067 "is_configured": false, 00:17:04.067 "data_offset": 0, 00:17:04.067 "data_size": 0 00:17:04.067 }, 00:17:04.067 { 00:17:04.067 "name": "BaseBdev2", 00:17:04.067 "uuid": "58a557f8-fead-40c1-a845-1d791f212484", 00:17:04.067 "is_configured": true, 00:17:04.067 "data_offset": 2048, 00:17:04.067 "data_size": 63488 00:17:04.067 }, 
00:17:04.067 { 00:17:04.067 "name": "BaseBdev3", 00:17:04.067 "uuid": "e30275d9-8b3b-4e36-bcd0-f693f52df466", 00:17:04.067 "is_configured": true, 00:17:04.067 "data_offset": 2048, 00:17:04.067 "data_size": 63488 00:17:04.067 } 00:17:04.067 ] 00:17:04.067 }' 00:17:04.067 13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:04.067 13:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.635 13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:04.894 [2024-07-25 13:17:15.156647] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:04.894 13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:04.894 13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:04.894 13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:04.894 13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:04.894 13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:04.894 13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:04.894 13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:04.894 13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:04.894 13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:04.894 13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:04.894 13:17:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.894 13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.153 13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.153 "name": "Existed_Raid", 00:17:05.153 "uuid": "25af8971-a7bb-4b30-a4be-17ceeb0f97d5", 00:17:05.153 "strip_size_kb": 0, 00:17:05.154 "state": "configuring", 00:17:05.154 "raid_level": "raid1", 00:17:05.154 "superblock": true, 00:17:05.154 "num_base_bdevs": 3, 00:17:05.154 "num_base_bdevs_discovered": 1, 00:17:05.154 "num_base_bdevs_operational": 3, 00:17:05.154 "base_bdevs_list": [ 00:17:05.154 { 00:17:05.154 "name": "BaseBdev1", 00:17:05.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.154 "is_configured": false, 00:17:05.154 "data_offset": 0, 00:17:05.154 "data_size": 0 00:17:05.154 }, 00:17:05.154 { 00:17:05.154 "name": null, 00:17:05.154 "uuid": "58a557f8-fead-40c1-a845-1d791f212484", 00:17:05.154 "is_configured": false, 00:17:05.154 "data_offset": 2048, 00:17:05.154 "data_size": 63488 00:17:05.154 }, 00:17:05.154 { 00:17:05.154 "name": "BaseBdev3", 00:17:05.154 "uuid": "e30275d9-8b3b-4e36-bcd0-f693f52df466", 00:17:05.154 "is_configured": true, 00:17:05.154 "data_offset": 2048, 00:17:05.154 "data_size": 63488 00:17:05.154 } 00:17:05.154 ] 00:17:05.154 }' 00:17:05.154 13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.154 13:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.785 13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.785 13:17:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:05.785 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:05.785 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:06.045 [2024-07-25 13:17:16.427198] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.045 BaseBdev1 00:17:06.045 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:06.045 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:06.045 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:06.045 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:06.045 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:06.045 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:06.045 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:06.303 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:06.563 [ 00:17:06.563 { 00:17:06.563 "name": "BaseBdev1", 00:17:06.563 "aliases": [ 00:17:06.563 "5a3923e5-6630-439a-ae39-f09ca4d39060" 00:17:06.563 ], 00:17:06.563 "product_name": "Malloc disk", 00:17:06.563 "block_size": 512, 00:17:06.563 "num_blocks": 65536, 00:17:06.563 "uuid": 
"5a3923e5-6630-439a-ae39-f09ca4d39060", 00:17:06.563 "assigned_rate_limits": { 00:17:06.563 "rw_ios_per_sec": 0, 00:17:06.563 "rw_mbytes_per_sec": 0, 00:17:06.563 "r_mbytes_per_sec": 0, 00:17:06.563 "w_mbytes_per_sec": 0 00:17:06.563 }, 00:17:06.563 "claimed": true, 00:17:06.563 "claim_type": "exclusive_write", 00:17:06.563 "zoned": false, 00:17:06.563 "supported_io_types": { 00:17:06.563 "read": true, 00:17:06.563 "write": true, 00:17:06.563 "unmap": true, 00:17:06.563 "flush": true, 00:17:06.563 "reset": true, 00:17:06.563 "nvme_admin": false, 00:17:06.563 "nvme_io": false, 00:17:06.563 "nvme_io_md": false, 00:17:06.563 "write_zeroes": true, 00:17:06.563 "zcopy": true, 00:17:06.563 "get_zone_info": false, 00:17:06.563 "zone_management": false, 00:17:06.563 "zone_append": false, 00:17:06.563 "compare": false, 00:17:06.563 "compare_and_write": false, 00:17:06.563 "abort": true, 00:17:06.563 "seek_hole": false, 00:17:06.563 "seek_data": false, 00:17:06.563 "copy": true, 00:17:06.563 "nvme_iov_md": false 00:17:06.563 }, 00:17:06.563 "memory_domains": [ 00:17:06.563 { 00:17:06.563 "dma_device_id": "system", 00:17:06.563 "dma_device_type": 1 00:17:06.563 }, 00:17:06.563 { 00:17:06.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.563 "dma_device_type": 2 00:17:06.563 } 00:17:06.563 ], 00:17:06.563 "driver_specific": {} 00:17:06.563 } 00:17:06.563 ] 00:17:06.563 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:06.563 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:06.563 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:06.563 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:06.563 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:06.563 
13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:06.563 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:06.563 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:06.563 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:06.563 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:06.563 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:06.563 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.563 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.822 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:06.822 "name": "Existed_Raid", 00:17:06.822 "uuid": "25af8971-a7bb-4b30-a4be-17ceeb0f97d5", 00:17:06.822 "strip_size_kb": 0, 00:17:06.822 "state": "configuring", 00:17:06.822 "raid_level": "raid1", 00:17:06.822 "superblock": true, 00:17:06.822 "num_base_bdevs": 3, 00:17:06.822 "num_base_bdevs_discovered": 2, 00:17:06.822 "num_base_bdevs_operational": 3, 00:17:06.822 "base_bdevs_list": [ 00:17:06.822 { 00:17:06.822 "name": "BaseBdev1", 00:17:06.822 "uuid": "5a3923e5-6630-439a-ae39-f09ca4d39060", 00:17:06.822 "is_configured": true, 00:17:06.822 "data_offset": 2048, 00:17:06.822 "data_size": 63488 00:17:06.822 }, 00:17:06.822 { 00:17:06.822 "name": null, 00:17:06.822 "uuid": "58a557f8-fead-40c1-a845-1d791f212484", 00:17:06.822 "is_configured": false, 00:17:06.822 "data_offset": 2048, 00:17:06.822 "data_size": 63488 00:17:06.822 }, 00:17:06.822 { 00:17:06.822 "name": 
"BaseBdev3", 00:17:06.822 "uuid": "e30275d9-8b3b-4e36-bcd0-f693f52df466", 00:17:06.822 "is_configured": true, 00:17:06.822 "data_offset": 2048, 00:17:06.822 "data_size": 63488 00:17:06.822 } 00:17:06.822 ] 00:17:06.822 }' 00:17:06.822 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:06.822 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.390 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.390 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:07.650 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:07.650 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:07.909 [2024-07-25 13:17:18.139740] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:07.909 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:07.909 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:07.909 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:07.909 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:07.909 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:07.909 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:07.909 13:17:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.909 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.909 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.909 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.909 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.909 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.909 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.909 "name": "Existed_Raid", 00:17:07.909 "uuid": "25af8971-a7bb-4b30-a4be-17ceeb0f97d5", 00:17:07.909 "strip_size_kb": 0, 00:17:07.909 "state": "configuring", 00:17:07.909 "raid_level": "raid1", 00:17:07.909 "superblock": true, 00:17:07.909 "num_base_bdevs": 3, 00:17:07.909 "num_base_bdevs_discovered": 1, 00:17:07.909 "num_base_bdevs_operational": 3, 00:17:07.909 "base_bdevs_list": [ 00:17:07.909 { 00:17:07.909 "name": "BaseBdev1", 00:17:07.909 "uuid": "5a3923e5-6630-439a-ae39-f09ca4d39060", 00:17:07.909 "is_configured": true, 00:17:07.909 "data_offset": 2048, 00:17:07.909 "data_size": 63488 00:17:07.909 }, 00:17:07.909 { 00:17:07.909 "name": null, 00:17:07.909 "uuid": "58a557f8-fead-40c1-a845-1d791f212484", 00:17:07.909 "is_configured": false, 00:17:07.909 "data_offset": 2048, 00:17:07.909 "data_size": 63488 00:17:07.909 }, 00:17:07.909 { 00:17:07.909 "name": null, 00:17:07.909 "uuid": "e30275d9-8b3b-4e36-bcd0-f693f52df466", 00:17:07.909 "is_configured": false, 00:17:07.909 "data_offset": 2048, 00:17:07.909 "data_size": 63488 00:17:07.909 } 00:17:07.909 ] 00:17:07.909 }' 00:17:07.909 13:17:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.909 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.846 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:08.846 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.846 13:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:08.846 13:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:09.104 [2024-07-25 13:17:19.407306] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:09.104 13:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:09.104 13:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:09.104 13:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:09.104 13:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:09.104 13:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:09.104 13:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:09.104 13:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.104 13:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.104 13:17:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.104 13:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.104 13:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.104 13:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.363 13:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.363 "name": "Existed_Raid", 00:17:09.363 "uuid": "25af8971-a7bb-4b30-a4be-17ceeb0f97d5", 00:17:09.363 "strip_size_kb": 0, 00:17:09.363 "state": "configuring", 00:17:09.363 "raid_level": "raid1", 00:17:09.363 "superblock": true, 00:17:09.363 "num_base_bdevs": 3, 00:17:09.363 "num_base_bdevs_discovered": 2, 00:17:09.363 "num_base_bdevs_operational": 3, 00:17:09.363 "base_bdevs_list": [ 00:17:09.363 { 00:17:09.363 "name": "BaseBdev1", 00:17:09.363 "uuid": "5a3923e5-6630-439a-ae39-f09ca4d39060", 00:17:09.363 "is_configured": true, 00:17:09.363 "data_offset": 2048, 00:17:09.363 "data_size": 63488 00:17:09.363 }, 00:17:09.363 { 00:17:09.363 "name": null, 00:17:09.363 "uuid": "58a557f8-fead-40c1-a845-1d791f212484", 00:17:09.363 "is_configured": false, 00:17:09.363 "data_offset": 2048, 00:17:09.363 "data_size": 63488 00:17:09.363 }, 00:17:09.363 { 00:17:09.363 "name": "BaseBdev3", 00:17:09.363 "uuid": "e30275d9-8b3b-4e36-bcd0-f693f52df466", 00:17:09.363 "is_configured": true, 00:17:09.363 "data_offset": 2048, 00:17:09.363 "data_size": 63488 00:17:09.363 } 00:17:09.363 ] 00:17:09.363 }' 00:17:09.363 13:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.363 13:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.932 13:17:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.932 13:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:10.191 13:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:10.191 13:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:10.191 [2024-07-25 13:17:20.658634] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:10.451 13:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:10.451 13:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:10.451 13:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:10.451 13:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:10.451 13:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:10.451 13:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:10.451 13:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.451 13:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.451 13:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.451 13:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.451 13:17:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.451 13:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.451 13:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.451 "name": "Existed_Raid", 00:17:10.451 "uuid": "25af8971-a7bb-4b30-a4be-17ceeb0f97d5", 00:17:10.451 "strip_size_kb": 0, 00:17:10.451 "state": "configuring", 00:17:10.451 "raid_level": "raid1", 00:17:10.451 "superblock": true, 00:17:10.451 "num_base_bdevs": 3, 00:17:10.451 "num_base_bdevs_discovered": 1, 00:17:10.451 "num_base_bdevs_operational": 3, 00:17:10.451 "base_bdevs_list": [ 00:17:10.451 { 00:17:10.451 "name": null, 00:17:10.451 "uuid": "5a3923e5-6630-439a-ae39-f09ca4d39060", 00:17:10.451 "is_configured": false, 00:17:10.451 "data_offset": 2048, 00:17:10.451 "data_size": 63488 00:17:10.451 }, 00:17:10.451 { 00:17:10.451 "name": null, 00:17:10.451 "uuid": "58a557f8-fead-40c1-a845-1d791f212484", 00:17:10.451 "is_configured": false, 00:17:10.451 "data_offset": 2048, 00:17:10.451 "data_size": 63488 00:17:10.451 }, 00:17:10.451 { 00:17:10.451 "name": "BaseBdev3", 00:17:10.451 "uuid": "e30275d9-8b3b-4e36-bcd0-f693f52df466", 00:17:10.451 "is_configured": true, 00:17:10.451 "data_offset": 2048, 00:17:10.451 "data_size": 63488 00:17:10.451 } 00:17:10.451 ] 00:17:10.451 }' 00:17:10.451 13:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.451 13:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.020 13:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.020 13:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:17:11.279 13:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:11.279 13:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:11.539 [2024-07-25 13:17:21.927920] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:11.539 13:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:11.539 13:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:11.539 13:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:11.539 13:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:11.539 13:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:11.539 13:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:11.539 13:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.539 13:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.539 13:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.539 13:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.539 13:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.539 13:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:17:11.798 13:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.798 "name": "Existed_Raid", 00:17:11.798 "uuid": "25af8971-a7bb-4b30-a4be-17ceeb0f97d5", 00:17:11.798 "strip_size_kb": 0, 00:17:11.798 "state": "configuring", 00:17:11.798 "raid_level": "raid1", 00:17:11.798 "superblock": true, 00:17:11.798 "num_base_bdevs": 3, 00:17:11.798 "num_base_bdevs_discovered": 2, 00:17:11.798 "num_base_bdevs_operational": 3, 00:17:11.798 "base_bdevs_list": [ 00:17:11.798 { 00:17:11.798 "name": null, 00:17:11.798 "uuid": "5a3923e5-6630-439a-ae39-f09ca4d39060", 00:17:11.798 "is_configured": false, 00:17:11.798 "data_offset": 2048, 00:17:11.798 "data_size": 63488 00:17:11.798 }, 00:17:11.798 { 00:17:11.798 "name": "BaseBdev2", 00:17:11.798 "uuid": "58a557f8-fead-40c1-a845-1d791f212484", 00:17:11.798 "is_configured": true, 00:17:11.798 "data_offset": 2048, 00:17:11.798 "data_size": 63488 00:17:11.798 }, 00:17:11.798 { 00:17:11.798 "name": "BaseBdev3", 00:17:11.798 "uuid": "e30275d9-8b3b-4e36-bcd0-f693f52df466", 00:17:11.798 "is_configured": true, 00:17:11.798 "data_offset": 2048, 00:17:11.798 "data_size": 63488 00:17:11.798 } 00:17:11.798 ] 00:17:11.798 }' 00:17:11.798 13:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.798 13:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.367 13:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.367 13:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:12.626 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:12.626 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.626 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:12.886 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5a3923e5-6630-439a-ae39-f09ca4d39060 00:17:13.144 [2024-07-25 13:17:23.459123] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:13.144 [2024-07-25 13:17:23.459265] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x10e7380 00:17:13.144 [2024-07-25 13:17:23.459277] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:13.144 [2024-07-25 13:17:23.459436] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10e0f50 00:17:13.144 [2024-07-25 13:17:23.459551] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10e7380 00:17:13.144 [2024-07-25 13:17:23.459560] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10e7380 00:17:13.144 [2024-07-25 13:17:23.459646] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.144 NewBaseBdev 00:17:13.144 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:13.144 13:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:13.144 13:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:13.144 13:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:13.144 13:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:13.144 
13:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:13.144 13:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:13.402 13:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:13.662 [ 00:17:13.662 { 00:17:13.662 "name": "NewBaseBdev", 00:17:13.662 "aliases": [ 00:17:13.662 "5a3923e5-6630-439a-ae39-f09ca4d39060" 00:17:13.662 ], 00:17:13.662 "product_name": "Malloc disk", 00:17:13.662 "block_size": 512, 00:17:13.662 "num_blocks": 65536, 00:17:13.662 "uuid": "5a3923e5-6630-439a-ae39-f09ca4d39060", 00:17:13.662 "assigned_rate_limits": { 00:17:13.662 "rw_ios_per_sec": 0, 00:17:13.662 "rw_mbytes_per_sec": 0, 00:17:13.662 "r_mbytes_per_sec": 0, 00:17:13.662 "w_mbytes_per_sec": 0 00:17:13.662 }, 00:17:13.662 "claimed": true, 00:17:13.662 "claim_type": "exclusive_write", 00:17:13.662 "zoned": false, 00:17:13.662 "supported_io_types": { 00:17:13.662 "read": true, 00:17:13.662 "write": true, 00:17:13.662 "unmap": true, 00:17:13.662 "flush": true, 00:17:13.662 "reset": true, 00:17:13.662 "nvme_admin": false, 00:17:13.662 "nvme_io": false, 00:17:13.662 "nvme_io_md": false, 00:17:13.662 "write_zeroes": true, 00:17:13.662 "zcopy": true, 00:17:13.662 "get_zone_info": false, 00:17:13.662 "zone_management": false, 00:17:13.662 "zone_append": false, 00:17:13.662 "compare": false, 00:17:13.662 "compare_and_write": false, 00:17:13.662 "abort": true, 00:17:13.662 "seek_hole": false, 00:17:13.662 "seek_data": false, 00:17:13.662 "copy": true, 00:17:13.662 "nvme_iov_md": false 00:17:13.662 }, 00:17:13.662 "memory_domains": [ 00:17:13.662 { 00:17:13.662 "dma_device_id": "system", 00:17:13.662 "dma_device_type": 1 00:17:13.662 
}, 00:17:13.662 { 00:17:13.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.662 "dma_device_type": 2 00:17:13.662 } 00:17:13.662 ], 00:17:13.662 "driver_specific": {} 00:17:13.662 } 00:17:13.662 ] 00:17:13.662 13:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:13.662 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:13.662 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:13.662 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:13.662 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:13.662 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:13.662 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:13.662 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:13.662 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:13.662 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:13.662 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:13.662 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.662 13:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.921 13:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:13.921 "name": "Existed_Raid", 00:17:13.921 "uuid": 
"25af8971-a7bb-4b30-a4be-17ceeb0f97d5", 00:17:13.921 "strip_size_kb": 0, 00:17:13.921 "state": "online", 00:17:13.921 "raid_level": "raid1", 00:17:13.921 "superblock": true, 00:17:13.921 "num_base_bdevs": 3, 00:17:13.921 "num_base_bdevs_discovered": 3, 00:17:13.921 "num_base_bdevs_operational": 3, 00:17:13.921 "base_bdevs_list": [ 00:17:13.921 { 00:17:13.921 "name": "NewBaseBdev", 00:17:13.921 "uuid": "5a3923e5-6630-439a-ae39-f09ca4d39060", 00:17:13.921 "is_configured": true, 00:17:13.921 "data_offset": 2048, 00:17:13.921 "data_size": 63488 00:17:13.921 }, 00:17:13.921 { 00:17:13.921 "name": "BaseBdev2", 00:17:13.921 "uuid": "58a557f8-fead-40c1-a845-1d791f212484", 00:17:13.921 "is_configured": true, 00:17:13.921 "data_offset": 2048, 00:17:13.921 "data_size": 63488 00:17:13.921 }, 00:17:13.921 { 00:17:13.921 "name": "BaseBdev3", 00:17:13.921 "uuid": "e30275d9-8b3b-4e36-bcd0-f693f52df466", 00:17:13.921 "is_configured": true, 00:17:13.921 "data_offset": 2048, 00:17:13.921 "data_size": 63488 00:17:13.921 } 00:17:13.921 ] 00:17:13.922 }' 00:17:13.922 13:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:13.922 13:17:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.490 13:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:14.490 13:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:14.490 13:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:14.490 13:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:14.490 13:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:14.490 13:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:14.490 13:17:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:14.490 13:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:14.490 [2024-07-25 13:17:24.919266] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.490 13:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:14.490 "name": "Existed_Raid", 00:17:14.490 "aliases": [ 00:17:14.490 "25af8971-a7bb-4b30-a4be-17ceeb0f97d5" 00:17:14.490 ], 00:17:14.490 "product_name": "Raid Volume", 00:17:14.490 "block_size": 512, 00:17:14.490 "num_blocks": 63488, 00:17:14.490 "uuid": "25af8971-a7bb-4b30-a4be-17ceeb0f97d5", 00:17:14.490 "assigned_rate_limits": { 00:17:14.490 "rw_ios_per_sec": 0, 00:17:14.490 "rw_mbytes_per_sec": 0, 00:17:14.490 "r_mbytes_per_sec": 0, 00:17:14.490 "w_mbytes_per_sec": 0 00:17:14.490 }, 00:17:14.490 "claimed": false, 00:17:14.490 "zoned": false, 00:17:14.490 "supported_io_types": { 00:17:14.490 "read": true, 00:17:14.490 "write": true, 00:17:14.490 "unmap": false, 00:17:14.490 "flush": false, 00:17:14.490 "reset": true, 00:17:14.490 "nvme_admin": false, 00:17:14.490 "nvme_io": false, 00:17:14.490 "nvme_io_md": false, 00:17:14.490 "write_zeroes": true, 00:17:14.490 "zcopy": false, 00:17:14.490 "get_zone_info": false, 00:17:14.490 "zone_management": false, 00:17:14.490 "zone_append": false, 00:17:14.490 "compare": false, 00:17:14.490 "compare_and_write": false, 00:17:14.490 "abort": false, 00:17:14.490 "seek_hole": false, 00:17:14.490 "seek_data": false, 00:17:14.490 "copy": false, 00:17:14.490 "nvme_iov_md": false 00:17:14.490 }, 00:17:14.490 "memory_domains": [ 00:17:14.490 { 00:17:14.490 "dma_device_id": "system", 00:17:14.490 "dma_device_type": 1 00:17:14.490 }, 00:17:14.490 { 00:17:14.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.490 
"dma_device_type": 2 00:17:14.490 }, 00:17:14.490 { 00:17:14.490 "dma_device_id": "system", 00:17:14.490 "dma_device_type": 1 00:17:14.490 }, 00:17:14.490 { 00:17:14.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.490 "dma_device_type": 2 00:17:14.490 }, 00:17:14.490 { 00:17:14.490 "dma_device_id": "system", 00:17:14.490 "dma_device_type": 1 00:17:14.490 }, 00:17:14.490 { 00:17:14.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.490 "dma_device_type": 2 00:17:14.490 } 00:17:14.490 ], 00:17:14.490 "driver_specific": { 00:17:14.490 "raid": { 00:17:14.490 "uuid": "25af8971-a7bb-4b30-a4be-17ceeb0f97d5", 00:17:14.490 "strip_size_kb": 0, 00:17:14.490 "state": "online", 00:17:14.490 "raid_level": "raid1", 00:17:14.490 "superblock": true, 00:17:14.490 "num_base_bdevs": 3, 00:17:14.490 "num_base_bdevs_discovered": 3, 00:17:14.490 "num_base_bdevs_operational": 3, 00:17:14.490 "base_bdevs_list": [ 00:17:14.490 { 00:17:14.490 "name": "NewBaseBdev", 00:17:14.490 "uuid": "5a3923e5-6630-439a-ae39-f09ca4d39060", 00:17:14.490 "is_configured": true, 00:17:14.490 "data_offset": 2048, 00:17:14.490 "data_size": 63488 00:17:14.490 }, 00:17:14.490 { 00:17:14.490 "name": "BaseBdev2", 00:17:14.490 "uuid": "58a557f8-fead-40c1-a845-1d791f212484", 00:17:14.490 "is_configured": true, 00:17:14.490 "data_offset": 2048, 00:17:14.490 "data_size": 63488 00:17:14.490 }, 00:17:14.491 { 00:17:14.491 "name": "BaseBdev3", 00:17:14.491 "uuid": "e30275d9-8b3b-4e36-bcd0-f693f52df466", 00:17:14.491 "is_configured": true, 00:17:14.491 "data_offset": 2048, 00:17:14.491 "data_size": 63488 00:17:14.491 } 00:17:14.491 ] 00:17:14.491 } 00:17:14.491 } 00:17:14.491 }' 00:17:14.491 13:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:14.750 13:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:14.750 BaseBdev2 00:17:14.750 
BaseBdev3' 00:17:14.750 13:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:14.750 13:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:14.750 13:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:14.750 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:14.750 "name": "NewBaseBdev", 00:17:14.750 "aliases": [ 00:17:14.750 "5a3923e5-6630-439a-ae39-f09ca4d39060" 00:17:14.750 ], 00:17:14.750 "product_name": "Malloc disk", 00:17:14.750 "block_size": 512, 00:17:14.750 "num_blocks": 65536, 00:17:14.750 "uuid": "5a3923e5-6630-439a-ae39-f09ca4d39060", 00:17:14.750 "assigned_rate_limits": { 00:17:14.750 "rw_ios_per_sec": 0, 00:17:14.750 "rw_mbytes_per_sec": 0, 00:17:14.750 "r_mbytes_per_sec": 0, 00:17:14.750 "w_mbytes_per_sec": 0 00:17:14.750 }, 00:17:14.750 "claimed": true, 00:17:14.750 "claim_type": "exclusive_write", 00:17:14.750 "zoned": false, 00:17:14.750 "supported_io_types": { 00:17:14.750 "read": true, 00:17:14.750 "write": true, 00:17:14.750 "unmap": true, 00:17:14.750 "flush": true, 00:17:14.750 "reset": true, 00:17:14.750 "nvme_admin": false, 00:17:14.750 "nvme_io": false, 00:17:14.750 "nvme_io_md": false, 00:17:14.750 "write_zeroes": true, 00:17:14.750 "zcopy": true, 00:17:14.750 "get_zone_info": false, 00:17:14.750 "zone_management": false, 00:17:14.750 "zone_append": false, 00:17:14.750 "compare": false, 00:17:14.750 "compare_and_write": false, 00:17:14.750 "abort": true, 00:17:14.750 "seek_hole": false, 00:17:14.750 "seek_data": false, 00:17:14.750 "copy": true, 00:17:14.750 "nvme_iov_md": false 00:17:14.750 }, 00:17:14.750 "memory_domains": [ 00:17:14.750 { 00:17:14.750 "dma_device_id": "system", 00:17:14.750 "dma_device_type": 1 00:17:14.750 }, 00:17:14.750 { 
00:17:14.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.750 "dma_device_type": 2 00:17:14.750 } 00:17:14.750 ], 00:17:14.750 "driver_specific": {} 00:17:14.750 }' 00:17:14.750 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.009 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.009 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:15.009 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.009 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.009 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:15.009 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.009 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.009 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:15.009 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.269 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.269 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:15.269 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:15.269 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:15.269 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:15.529 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:15.529 "name": 
"BaseBdev2", 00:17:15.529 "aliases": [ 00:17:15.529 "58a557f8-fead-40c1-a845-1d791f212484" 00:17:15.529 ], 00:17:15.529 "product_name": "Malloc disk", 00:17:15.529 "block_size": 512, 00:17:15.529 "num_blocks": 65536, 00:17:15.529 "uuid": "58a557f8-fead-40c1-a845-1d791f212484", 00:17:15.529 "assigned_rate_limits": { 00:17:15.529 "rw_ios_per_sec": 0, 00:17:15.529 "rw_mbytes_per_sec": 0, 00:17:15.529 "r_mbytes_per_sec": 0, 00:17:15.529 "w_mbytes_per_sec": 0 00:17:15.529 }, 00:17:15.529 "claimed": true, 00:17:15.529 "claim_type": "exclusive_write", 00:17:15.529 "zoned": false, 00:17:15.529 "supported_io_types": { 00:17:15.529 "read": true, 00:17:15.529 "write": true, 00:17:15.529 "unmap": true, 00:17:15.529 "flush": true, 00:17:15.529 "reset": true, 00:17:15.529 "nvme_admin": false, 00:17:15.529 "nvme_io": false, 00:17:15.529 "nvme_io_md": false, 00:17:15.529 "write_zeroes": true, 00:17:15.529 "zcopy": true, 00:17:15.529 "get_zone_info": false, 00:17:15.529 "zone_management": false, 00:17:15.529 "zone_append": false, 00:17:15.529 "compare": false, 00:17:15.529 "compare_and_write": false, 00:17:15.529 "abort": true, 00:17:15.529 "seek_hole": false, 00:17:15.529 "seek_data": false, 00:17:15.529 "copy": true, 00:17:15.529 "nvme_iov_md": false 00:17:15.529 }, 00:17:15.529 "memory_domains": [ 00:17:15.529 { 00:17:15.529 "dma_device_id": "system", 00:17:15.529 "dma_device_type": 1 00:17:15.529 }, 00:17:15.529 { 00:17:15.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.529 "dma_device_type": 2 00:17:15.529 } 00:17:15.529 ], 00:17:15.529 "driver_specific": {} 00:17:15.529 }' 00:17:15.529 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.529 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.529 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:15.529 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:17:15.529 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.529 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:15.529 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.529 13:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.788 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:15.788 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.788 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.788 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:15.788 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:15.788 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:15.788 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:16.047 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:16.047 "name": "BaseBdev3", 00:17:16.047 "aliases": [ 00:17:16.047 "e30275d9-8b3b-4e36-bcd0-f693f52df466" 00:17:16.047 ], 00:17:16.047 "product_name": "Malloc disk", 00:17:16.047 "block_size": 512, 00:17:16.047 "num_blocks": 65536, 00:17:16.047 "uuid": "e30275d9-8b3b-4e36-bcd0-f693f52df466", 00:17:16.047 "assigned_rate_limits": { 00:17:16.047 "rw_ios_per_sec": 0, 00:17:16.047 "rw_mbytes_per_sec": 0, 00:17:16.047 "r_mbytes_per_sec": 0, 00:17:16.047 "w_mbytes_per_sec": 0 00:17:16.047 }, 00:17:16.047 "claimed": true, 00:17:16.047 "claim_type": "exclusive_write", 00:17:16.047 "zoned": 
false, 00:17:16.047 "supported_io_types": { 00:17:16.047 "read": true, 00:17:16.047 "write": true, 00:17:16.047 "unmap": true, 00:17:16.047 "flush": true, 00:17:16.047 "reset": true, 00:17:16.047 "nvme_admin": false, 00:17:16.047 "nvme_io": false, 00:17:16.047 "nvme_io_md": false, 00:17:16.047 "write_zeroes": true, 00:17:16.047 "zcopy": true, 00:17:16.047 "get_zone_info": false, 00:17:16.047 "zone_management": false, 00:17:16.047 "zone_append": false, 00:17:16.047 "compare": false, 00:17:16.047 "compare_and_write": false, 00:17:16.047 "abort": true, 00:17:16.047 "seek_hole": false, 00:17:16.047 "seek_data": false, 00:17:16.047 "copy": true, 00:17:16.047 "nvme_iov_md": false 00:17:16.047 }, 00:17:16.047 "memory_domains": [ 00:17:16.047 { 00:17:16.047 "dma_device_id": "system", 00:17:16.047 "dma_device_type": 1 00:17:16.047 }, 00:17:16.047 { 00:17:16.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.047 "dma_device_type": 2 00:17:16.047 } 00:17:16.047 ], 00:17:16.047 "driver_specific": {} 00:17:16.047 }' 00:17:16.047 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:16.047 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:16.047 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:16.047 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.047 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.047 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:16.047 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.307 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.307 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:16.307 13:17:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.307 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.307 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:16.307 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:16.567 [2024-07-25 13:17:26.892219] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:16.567 [2024-07-25 13:17:26.892242] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:16.567 [2024-07-25 13:17:26.892285] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.567 [2024-07-25 13:17:26.892526] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:16.567 [2024-07-25 13:17:26.892538] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10e7380 name Existed_Raid, state offline 00:17:16.567 13:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 887857 00:17:16.567 13:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 887857 ']' 00:17:16.567 13:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 887857 00:17:16.567 13:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:16.567 13:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:16.567 13:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 887857 00:17:16.567 13:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:17:16.567 13:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:16.567 13:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 887857' 00:17:16.567 killing process with pid 887857 00:17:16.567 13:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 887857 00:17:16.567 [2024-07-25 13:17:26.960596] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:16.567 13:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 887857 00:17:16.567 [2024-07-25 13:17:26.985407] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:16.826 13:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:16.826 00:17:16.826 real 0m27.689s 00:17:16.826 user 0m50.868s 00:17:16.826 sys 0m4.954s 00:17:16.826 13:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.826 13:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.826 ************************************ 00:17:16.826 END TEST raid_state_function_test_sb 00:17:16.826 ************************************ 00:17:16.826 13:17:27 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:17:16.826 13:17:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:16.826 13:17:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:16.827 13:17:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:16.827 ************************************ 00:17:16.827 START TEST raid_superblock_test 00:17:16.827 ************************************ 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=893000 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 893000 /var/tmp/spdk-raid.sock 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@831 -- # '[' -z 893000 ']' 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:16.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:16.827 13:17:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.087 [2024-07-25 13:17:27.328350] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:17:17.087 [2024-07-25 13:17:27.328412] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893000 ] 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:17:17.087 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: 
Requested device 0000:3f:01.3 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:17.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:17.087 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:17.087 [2024-07-25 13:17:27.461533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.087 [2024-07-25 13:17:27.547681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.347 [2024-07-25 13:17:27.611830] 
bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.347 [2024-07-25 13:17:27.611865] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.913 13:17:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:17.913 13:17:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:17:17.913 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:17:17.913 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:17.913 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:17:17.913 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:17:17.913 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:17.913 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:17.913 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:17.913 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:17.913 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:18.172 malloc1 00:17:18.172 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:18.172 [2024-07-25 13:17:28.648608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:18.172 [2024-07-25 13:17:28.648651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:18.172 [2024-07-25 13:17:28.648668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa702f0 00:17:18.172 [2024-07-25 13:17:28.648680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.172 [2024-07-25 13:17:28.650168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.172 [2024-07-25 13:17:28.650196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:18.172 pt1 00:17:18.433 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:18.433 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:18.433 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:17:18.433 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:17:18.433 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:18.433 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:18.433 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:18.433 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:18.433 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:18.433 malloc2 00:17:18.433 13:17:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:18.703 [2024-07-25 13:17:29.106325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:17:18.703 [2024-07-25 13:17:29.106368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.703 [2024-07-25 13:17:29.106384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc07f70 00:17:18.703 [2024-07-25 13:17:29.106396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.703 [2024-07-25 13:17:29.107855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.703 [2024-07-25 13:17:29.107883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:18.703 pt2 00:17:18.703 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:18.703 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:18.703 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:17:18.703 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:17:18.703 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:18.703 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:18.703 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:18.703 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:18.703 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:18.962 malloc3 00:17:18.962 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 
00000000-0000-0000-0000-000000000003 00:17:19.221 [2024-07-25 13:17:29.571738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:19.221 [2024-07-25 13:17:29.571785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.221 [2024-07-25 13:17:29.571801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc0b830 00:17:19.221 [2024-07-25 13:17:29.571813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.221 [2024-07-25 13:17:29.573169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.221 [2024-07-25 13:17:29.573196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:19.221 pt3 00:17:19.221 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:19.221 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:19.222 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:19.481 [2024-07-25 13:17:29.784315] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:19.481 [2024-07-25 13:17:29.785425] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:19.481 [2024-07-25 13:17:29.785475] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:19.481 [2024-07-25 13:17:29.785596] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xc0aab0 00:17:19.481 [2024-07-25 13:17:29.785606] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:19.481 [2024-07-25 13:17:29.785789] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc0d120 00:17:19.481 [2024-07-25 13:17:29.785913] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc0aab0 00:17:19.481 [2024-07-25 13:17:29.785922] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc0aab0 00:17:19.481 [2024-07-25 13:17:29.786020] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.481 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:19.481 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:19.481 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:19.481 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:19.481 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:19.481 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:19.481 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:19.481 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:19.481 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:19.481 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:19.481 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.481 13:17:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.740 13:17:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:19.740 "name": "raid_bdev1", 00:17:19.740 "uuid": "1685fcbe-e59b-44d7-acc5-9cda31131ae2", 00:17:19.740 "strip_size_kb": 
0, 00:17:19.740 "state": "online", 00:17:19.740 "raid_level": "raid1", 00:17:19.740 "superblock": true, 00:17:19.740 "num_base_bdevs": 3, 00:17:19.740 "num_base_bdevs_discovered": 3, 00:17:19.740 "num_base_bdevs_operational": 3, 00:17:19.740 "base_bdevs_list": [ 00:17:19.740 { 00:17:19.740 "name": "pt1", 00:17:19.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.740 "is_configured": true, 00:17:19.740 "data_offset": 2048, 00:17:19.740 "data_size": 63488 00:17:19.740 }, 00:17:19.740 { 00:17:19.740 "name": "pt2", 00:17:19.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.740 "is_configured": true, 00:17:19.740 "data_offset": 2048, 00:17:19.740 "data_size": 63488 00:17:19.740 }, 00:17:19.740 { 00:17:19.740 "name": "pt3", 00:17:19.740 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.740 "is_configured": true, 00:17:19.740 "data_offset": 2048, 00:17:19.740 "data_size": 63488 00:17:19.740 } 00:17:19.740 ] 00:17:19.740 }' 00:17:19.740 13:17:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.740 13:17:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.309 13:17:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:17:20.309 13:17:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:20.309 13:17:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:20.309 13:17:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:20.309 13:17:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:20.309 13:17:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:20.309 13:17:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 
00:17:20.309 13:17:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:20.309 [2024-07-25 13:17:30.791202] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.569 13:17:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:20.569 "name": "raid_bdev1", 00:17:20.569 "aliases": [ 00:17:20.569 "1685fcbe-e59b-44d7-acc5-9cda31131ae2" 00:17:20.569 ], 00:17:20.569 "product_name": "Raid Volume", 00:17:20.569 "block_size": 512, 00:17:20.569 "num_blocks": 63488, 00:17:20.569 "uuid": "1685fcbe-e59b-44d7-acc5-9cda31131ae2", 00:17:20.569 "assigned_rate_limits": { 00:17:20.569 "rw_ios_per_sec": 0, 00:17:20.569 "rw_mbytes_per_sec": 0, 00:17:20.569 "r_mbytes_per_sec": 0, 00:17:20.569 "w_mbytes_per_sec": 0 00:17:20.569 }, 00:17:20.569 "claimed": false, 00:17:20.569 "zoned": false, 00:17:20.569 "supported_io_types": { 00:17:20.569 "read": true, 00:17:20.569 "write": true, 00:17:20.569 "unmap": false, 00:17:20.569 "flush": false, 00:17:20.569 "reset": true, 00:17:20.569 "nvme_admin": false, 00:17:20.569 "nvme_io": false, 00:17:20.569 "nvme_io_md": false, 00:17:20.569 "write_zeroes": true, 00:17:20.569 "zcopy": false, 00:17:20.569 "get_zone_info": false, 00:17:20.569 "zone_management": false, 00:17:20.569 "zone_append": false, 00:17:20.569 "compare": false, 00:17:20.569 "compare_and_write": false, 00:17:20.569 "abort": false, 00:17:20.569 "seek_hole": false, 00:17:20.569 "seek_data": false, 00:17:20.569 "copy": false, 00:17:20.569 "nvme_iov_md": false 00:17:20.569 }, 00:17:20.569 "memory_domains": [ 00:17:20.569 { 00:17:20.569 "dma_device_id": "system", 00:17:20.569 "dma_device_type": 1 00:17:20.569 }, 00:17:20.569 { 00:17:20.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.569 "dma_device_type": 2 00:17:20.569 }, 00:17:20.569 { 00:17:20.569 "dma_device_id": "system", 00:17:20.569 "dma_device_type": 1 00:17:20.569 }, 00:17:20.569 { 00:17:20.569 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:20.569 "dma_device_type": 2 00:17:20.569 }, 00:17:20.569 { 00:17:20.569 "dma_device_id": "system", 00:17:20.569 "dma_device_type": 1 00:17:20.569 }, 00:17:20.569 { 00:17:20.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.569 "dma_device_type": 2 00:17:20.569 } 00:17:20.569 ], 00:17:20.569 "driver_specific": { 00:17:20.569 "raid": { 00:17:20.569 "uuid": "1685fcbe-e59b-44d7-acc5-9cda31131ae2", 00:17:20.569 "strip_size_kb": 0, 00:17:20.569 "state": "online", 00:17:20.569 "raid_level": "raid1", 00:17:20.569 "superblock": true, 00:17:20.569 "num_base_bdevs": 3, 00:17:20.569 "num_base_bdevs_discovered": 3, 00:17:20.569 "num_base_bdevs_operational": 3, 00:17:20.569 "base_bdevs_list": [ 00:17:20.569 { 00:17:20.569 "name": "pt1", 00:17:20.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.569 "is_configured": true, 00:17:20.569 "data_offset": 2048, 00:17:20.569 "data_size": 63488 00:17:20.569 }, 00:17:20.569 { 00:17:20.569 "name": "pt2", 00:17:20.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.569 "is_configured": true, 00:17:20.569 "data_offset": 2048, 00:17:20.569 "data_size": 63488 00:17:20.569 }, 00:17:20.569 { 00:17:20.569 "name": "pt3", 00:17:20.569 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:20.569 "is_configured": true, 00:17:20.569 "data_offset": 2048, 00:17:20.569 "data_size": 63488 00:17:20.569 } 00:17:20.569 ] 00:17:20.569 } 00:17:20.569 } 00:17:20.569 }' 00:17:20.569 13:17:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:20.569 13:17:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:20.569 pt2 00:17:20.569 pt3' 00:17:20.569 13:17:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:20.569 13:17:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:20.569 13:17:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:20.828 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:20.828 "name": "pt1", 00:17:20.828 "aliases": [ 00:17:20.828 "00000000-0000-0000-0000-000000000001" 00:17:20.828 ], 00:17:20.828 "product_name": "passthru", 00:17:20.828 "block_size": 512, 00:17:20.828 "num_blocks": 65536, 00:17:20.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.828 "assigned_rate_limits": { 00:17:20.828 "rw_ios_per_sec": 0, 00:17:20.828 "rw_mbytes_per_sec": 0, 00:17:20.828 "r_mbytes_per_sec": 0, 00:17:20.828 "w_mbytes_per_sec": 0 00:17:20.828 }, 00:17:20.828 "claimed": true, 00:17:20.828 "claim_type": "exclusive_write", 00:17:20.828 "zoned": false, 00:17:20.828 "supported_io_types": { 00:17:20.828 "read": true, 00:17:20.828 "write": true, 00:17:20.828 "unmap": true, 00:17:20.828 "flush": true, 00:17:20.828 "reset": true, 00:17:20.828 "nvme_admin": false, 00:17:20.828 "nvme_io": false, 00:17:20.828 "nvme_io_md": false, 00:17:20.828 "write_zeroes": true, 00:17:20.828 "zcopy": true, 00:17:20.828 "get_zone_info": false, 00:17:20.828 "zone_management": false, 00:17:20.828 "zone_append": false, 00:17:20.828 "compare": false, 00:17:20.828 "compare_and_write": false, 00:17:20.828 "abort": true, 00:17:20.828 "seek_hole": false, 00:17:20.828 "seek_data": false, 00:17:20.828 "copy": true, 00:17:20.828 "nvme_iov_md": false 00:17:20.828 }, 00:17:20.828 "memory_domains": [ 00:17:20.828 { 00:17:20.828 "dma_device_id": "system", 00:17:20.828 "dma_device_type": 1 00:17:20.828 }, 00:17:20.828 { 00:17:20.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.828 "dma_device_type": 2 00:17:20.828 } 00:17:20.828 ], 00:17:20.828 "driver_specific": { 00:17:20.828 "passthru": { 00:17:20.828 "name": "pt1", 00:17:20.828 "base_bdev_name": "malloc1" 00:17:20.828 } 00:17:20.828 } 
00:17:20.828 }' 00:17:20.828 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.828 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.828 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:20.828 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.828 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.828 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:20.828 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:20.828 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.087 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:21.087 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.087 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.087 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:21.087 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:21.087 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:21.087 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:21.346 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:21.346 "name": "pt2", 00:17:21.346 "aliases": [ 00:17:21.346 "00000000-0000-0000-0000-000000000002" 00:17:21.346 ], 00:17:21.346 "product_name": "passthru", 00:17:21.346 "block_size": 512, 00:17:21.346 "num_blocks": 65536, 00:17:21.346 "uuid": "00000000-0000-0000-0000-000000000002", 
00:17:21.346 "assigned_rate_limits": { 00:17:21.346 "rw_ios_per_sec": 0, 00:17:21.346 "rw_mbytes_per_sec": 0, 00:17:21.346 "r_mbytes_per_sec": 0, 00:17:21.346 "w_mbytes_per_sec": 0 00:17:21.346 }, 00:17:21.346 "claimed": true, 00:17:21.346 "claim_type": "exclusive_write", 00:17:21.346 "zoned": false, 00:17:21.346 "supported_io_types": { 00:17:21.346 "read": true, 00:17:21.346 "write": true, 00:17:21.346 "unmap": true, 00:17:21.346 "flush": true, 00:17:21.346 "reset": true, 00:17:21.346 "nvme_admin": false, 00:17:21.346 "nvme_io": false, 00:17:21.346 "nvme_io_md": false, 00:17:21.346 "write_zeroes": true, 00:17:21.346 "zcopy": true, 00:17:21.346 "get_zone_info": false, 00:17:21.346 "zone_management": false, 00:17:21.346 "zone_append": false, 00:17:21.346 "compare": false, 00:17:21.346 "compare_and_write": false, 00:17:21.346 "abort": true, 00:17:21.346 "seek_hole": false, 00:17:21.346 "seek_data": false, 00:17:21.346 "copy": true, 00:17:21.346 "nvme_iov_md": false 00:17:21.346 }, 00:17:21.346 "memory_domains": [ 00:17:21.346 { 00:17:21.346 "dma_device_id": "system", 00:17:21.346 "dma_device_type": 1 00:17:21.346 }, 00:17:21.346 { 00:17:21.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.346 "dma_device_type": 2 00:17:21.346 } 00:17:21.346 ], 00:17:21.346 "driver_specific": { 00:17:21.346 "passthru": { 00:17:21.346 "name": "pt2", 00:17:21.346 "base_bdev_name": "malloc2" 00:17:21.346 } 00:17:21.346 } 00:17:21.346 }' 00:17:21.346 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.346 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.346 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:21.346 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.346 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.346 13:17:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:21.346 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.604 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.605 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:21.605 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.605 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.605 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:21.605 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:21.605 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:21.605 13:17:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:21.863 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:21.863 "name": "pt3", 00:17:21.863 "aliases": [ 00:17:21.863 "00000000-0000-0000-0000-000000000003" 00:17:21.863 ], 00:17:21.863 "product_name": "passthru", 00:17:21.863 "block_size": 512, 00:17:21.863 "num_blocks": 65536, 00:17:21.863 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:21.863 "assigned_rate_limits": { 00:17:21.863 "rw_ios_per_sec": 0, 00:17:21.863 "rw_mbytes_per_sec": 0, 00:17:21.863 "r_mbytes_per_sec": 0, 00:17:21.863 "w_mbytes_per_sec": 0 00:17:21.863 }, 00:17:21.863 "claimed": true, 00:17:21.863 "claim_type": "exclusive_write", 00:17:21.863 "zoned": false, 00:17:21.863 "supported_io_types": { 00:17:21.863 "read": true, 00:17:21.863 "write": true, 00:17:21.863 "unmap": true, 00:17:21.863 "flush": true, 00:17:21.863 "reset": true, 00:17:21.863 "nvme_admin": false, 00:17:21.863 "nvme_io": false, 00:17:21.863 
"nvme_io_md": false, 00:17:21.863 "write_zeroes": true, 00:17:21.863 "zcopy": true, 00:17:21.863 "get_zone_info": false, 00:17:21.863 "zone_management": false, 00:17:21.863 "zone_append": false, 00:17:21.863 "compare": false, 00:17:21.863 "compare_and_write": false, 00:17:21.863 "abort": true, 00:17:21.863 "seek_hole": false, 00:17:21.863 "seek_data": false, 00:17:21.863 "copy": true, 00:17:21.863 "nvme_iov_md": false 00:17:21.863 }, 00:17:21.863 "memory_domains": [ 00:17:21.863 { 00:17:21.863 "dma_device_id": "system", 00:17:21.863 "dma_device_type": 1 00:17:21.863 }, 00:17:21.863 { 00:17:21.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.863 "dma_device_type": 2 00:17:21.863 } 00:17:21.863 ], 00:17:21.863 "driver_specific": { 00:17:21.863 "passthru": { 00:17:21.863 "name": "pt3", 00:17:21.863 "base_bdev_name": "malloc3" 00:17:21.863 } 00:17:21.863 } 00:17:21.863 }' 00:17:21.863 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.863 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.863 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:21.863 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.121 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.121 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:22.121 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.121 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.121 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:22.121 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.121 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.121 13:17:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:22.121 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:22.122 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:17:22.380 [2024-07-25 13:17:32.784455] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.380 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=1685fcbe-e59b-44d7-acc5-9cda31131ae2 00:17:22.380 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 1685fcbe-e59b-44d7-acc5-9cda31131ae2 ']' 00:17:22.380 13:17:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:22.638 [2024-07-25 13:17:33.016812] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:22.638 [2024-07-25 13:17:33.016827] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:22.638 [2024-07-25 13:17:33.016866] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.638 [2024-07-25 13:17:33.016923] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.638 [2024-07-25 13:17:33.016934] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc0aab0 name raid_bdev1, state offline 00:17:22.638 13:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.639 13:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:17:22.898 13:17:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # raid_bdev= 00:17:22.898 13:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:17:22.898 13:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:22.898 13:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:23.156 13:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:23.156 13:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:23.414 13:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:23.414 13:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:23.674 13:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:23.674 13:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:23.674 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:17:23.674 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:23.674 13:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:23.674 13:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:23.674 13:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:23.674 13:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:23.674 13:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:23.932 13:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:23.932 13:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:23.932 13:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:23.932 13:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:23.932 13:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:17:23.932 13:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:23.932 [2024-07-25 13:17:34.320193] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:23.932 [2024-07-25 13:17:34.321456] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:23.932 [2024-07-25 13:17:34.321496] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:23.932 [2024-07-25 13:17:34.321536] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:23.932 [2024-07-25 13:17:34.321572] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:23.932 [2024-07-25 13:17:34.321594] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:23.932 [2024-07-25 13:17:34.321611] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.932 [2024-07-25 13:17:34.321620] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc0aa80 name raid_bdev1, state configuring 00:17:23.932 request: 00:17:23.932 { 00:17:23.933 "name": "raid_bdev1", 00:17:23.933 "raid_level": "raid1", 00:17:23.933 "base_bdevs": [ 00:17:23.933 "malloc1", 00:17:23.933 "malloc2", 00:17:23.933 "malloc3" 00:17:23.933 ], 00:17:23.933 "superblock": false, 00:17:23.933 "method": "bdev_raid_create", 00:17:23.933 "req_id": 1 00:17:23.933 } 00:17:23.933 Got JSON-RPC error response 00:17:23.933 response: 00:17:23.933 { 00:17:23.933 "code": -17, 00:17:23.933 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:23.933 } 00:17:23.933 13:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:23.933 13:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:23.933 13:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:23.933 13:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:23.933 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.933 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:17:24.191 13:17:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:17:24.191 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:17:24.191 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:24.450 [2024-07-25 13:17:34.785371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:24.450 [2024-07-25 13:17:34.785408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.450 [2024-07-25 13:17:34.785424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc0aa80 00:17:24.450 [2024-07-25 13:17:34.785435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.450 [2024-07-25 13:17:34.786789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.450 [2024-07-25 13:17:34.786815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:24.450 [2024-07-25 13:17:34.786869] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:24.450 [2024-07-25 13:17:34.786891] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:24.450 pt1 00:17:24.450 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:24.450 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:24.450 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:24.450 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:24.450 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:24.450 
13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:24.450 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:24.450 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:24.450 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:24.450 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:24.450 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.450 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.709 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:24.709 "name": "raid_bdev1", 00:17:24.709 "uuid": "1685fcbe-e59b-44d7-acc5-9cda31131ae2", 00:17:24.709 "strip_size_kb": 0, 00:17:24.709 "state": "configuring", 00:17:24.709 "raid_level": "raid1", 00:17:24.709 "superblock": true, 00:17:24.709 "num_base_bdevs": 3, 00:17:24.709 "num_base_bdevs_discovered": 1, 00:17:24.709 "num_base_bdevs_operational": 3, 00:17:24.709 "base_bdevs_list": [ 00:17:24.709 { 00:17:24.709 "name": "pt1", 00:17:24.709 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:24.709 "is_configured": true, 00:17:24.709 "data_offset": 2048, 00:17:24.709 "data_size": 63488 00:17:24.709 }, 00:17:24.709 { 00:17:24.709 "name": null, 00:17:24.709 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.709 "is_configured": false, 00:17:24.709 "data_offset": 2048, 00:17:24.709 "data_size": 63488 00:17:24.709 }, 00:17:24.709 { 00:17:24.709 "name": null, 00:17:24.709 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:24.709 "is_configured": false, 00:17:24.709 "data_offset": 2048, 00:17:24.709 "data_size": 63488 
00:17:24.709 } 00:17:24.709 ] 00:17:24.709 }' 00:17:24.709 13:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:24.709 13:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.277 13:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:17:25.278 13:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:25.278 [2024-07-25 13:17:35.763963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:25.278 [2024-07-25 13:17:35.764007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.278 [2024-07-25 13:17:35.764025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc081a0 00:17:25.278 [2024-07-25 13:17:35.764036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.278 [2024-07-25 13:17:35.764344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.278 [2024-07-25 13:17:35.764361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:25.278 [2024-07-25 13:17:35.764415] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:25.278 [2024-07-25 13:17:35.764433] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:25.537 pt2 00:17:25.537 13:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:25.537 [2024-07-25 13:17:35.988565] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:25.537 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 3 00:17:25.537 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:25.537 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:25.537 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:25.537 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:25.537 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:25.537 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.537 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.537 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.537 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.537 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.537 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.796 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.796 "name": "raid_bdev1", 00:17:25.796 "uuid": "1685fcbe-e59b-44d7-acc5-9cda31131ae2", 00:17:25.796 "strip_size_kb": 0, 00:17:25.796 "state": "configuring", 00:17:25.796 "raid_level": "raid1", 00:17:25.796 "superblock": true, 00:17:25.796 "num_base_bdevs": 3, 00:17:25.796 "num_base_bdevs_discovered": 1, 00:17:25.796 "num_base_bdevs_operational": 3, 00:17:25.796 "base_bdevs_list": [ 00:17:25.796 { 00:17:25.796 "name": "pt1", 00:17:25.796 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:25.796 "is_configured": true, 00:17:25.796 "data_offset": 2048, 00:17:25.796 
"data_size": 63488 00:17:25.796 }, 00:17:25.796 { 00:17:25.796 "name": null, 00:17:25.796 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.796 "is_configured": false, 00:17:25.796 "data_offset": 2048, 00:17:25.796 "data_size": 63488 00:17:25.796 }, 00:17:25.796 { 00:17:25.796 "name": null, 00:17:25.796 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:25.796 "is_configured": false, 00:17:25.796 "data_offset": 2048, 00:17:25.796 "data_size": 63488 00:17:25.796 } 00:17:25.796 ] 00:17:25.796 }' 00:17:25.796 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.796 13:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.363 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:17:26.363 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:26.363 13:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:26.622 [2024-07-25 13:17:37.019273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:26.622 [2024-07-25 13:17:37.019317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.622 [2024-07-25 13:17:37.019334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa706c0 00:17:26.622 [2024-07-25 13:17:37.019346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.622 [2024-07-25 13:17:37.019645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.622 [2024-07-25 13:17:37.019662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:26.622 [2024-07-25 13:17:37.019718] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
pt2 00:17:26.622 [2024-07-25 13:17:37.019735] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:26.622 pt2 00:17:26.622 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:17:26.622 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:26.622 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:26.881 [2024-07-25 13:17:37.243860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:26.881 [2024-07-25 13:17:37.243890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.881 [2024-07-25 13:17:37.243904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc0d430 00:17:26.881 [2024-07-25 13:17:37.243915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.881 [2024-07-25 13:17:37.244188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.881 [2024-07-25 13:17:37.244204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:26.881 [2024-07-25 13:17:37.244256] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:26.881 [2024-07-25 13:17:37.244273] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:26.881 [2024-07-25 13:17:37.244369] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xc0d9f0 00:17:26.881 [2024-07-25 13:17:37.244379] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:26.881 [2024-07-25 13:17:37.244534] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc14290 00:17:26.881 [2024-07-25 13:17:37.244651] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc0d9f0 00:17:26.881 [2024-07-25 13:17:37.244660] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc0d9f0 00:17:26.881 [2024-07-25 13:17:37.244744] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.881 pt3 00:17:26.881 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:17:26.881 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:26.881 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:26.881 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:26.881 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:26.881 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:26.881 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:26.881 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:26.881 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.881 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.881 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.881 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.881 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.881 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:27.141 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:27.141 "name": "raid_bdev1", 00:17:27.141 "uuid": "1685fcbe-e59b-44d7-acc5-9cda31131ae2", 00:17:27.141 "strip_size_kb": 0, 00:17:27.141 "state": "online", 00:17:27.141 "raid_level": "raid1", 00:17:27.141 "superblock": true, 00:17:27.141 "num_base_bdevs": 3, 00:17:27.141 "num_base_bdevs_discovered": 3, 00:17:27.141 "num_base_bdevs_operational": 3, 00:17:27.141 "base_bdevs_list": [ 00:17:27.141 { 00:17:27.141 "name": "pt1", 00:17:27.141 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:27.141 "is_configured": true, 00:17:27.141 "data_offset": 2048, 00:17:27.141 "data_size": 63488 00:17:27.141 }, 00:17:27.141 { 00:17:27.141 "name": "pt2", 00:17:27.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:27.141 "is_configured": true, 00:17:27.141 "data_offset": 2048, 00:17:27.141 "data_size": 63488 00:17:27.141 }, 00:17:27.141 { 00:17:27.141 "name": "pt3", 00:17:27.141 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:27.141 "is_configured": true, 00:17:27.141 "data_offset": 2048, 00:17:27.141 "data_size": 63488 00:17:27.141 } 00:17:27.141 ] 00:17:27.141 }' 00:17:27.141 13:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:27.141 13:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.709 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:17:27.709 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:27.709 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:27.709 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:27.709 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:27.709 13:17:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@198 -- # local name 00:17:27.709 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:27.709 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:27.969 [2024-07-25 13:17:38.258787] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:27.969 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:27.969 "name": "raid_bdev1", 00:17:27.969 "aliases": [ 00:17:27.969 "1685fcbe-e59b-44d7-acc5-9cda31131ae2" 00:17:27.969 ], 00:17:27.969 "product_name": "Raid Volume", 00:17:27.969 "block_size": 512, 00:17:27.969 "num_blocks": 63488, 00:17:27.969 "uuid": "1685fcbe-e59b-44d7-acc5-9cda31131ae2", 00:17:27.969 "assigned_rate_limits": { 00:17:27.969 "rw_ios_per_sec": 0, 00:17:27.969 "rw_mbytes_per_sec": 0, 00:17:27.969 "r_mbytes_per_sec": 0, 00:17:27.969 "w_mbytes_per_sec": 0 00:17:27.969 }, 00:17:27.969 "claimed": false, 00:17:27.969 "zoned": false, 00:17:27.969 "supported_io_types": { 00:17:27.969 "read": true, 00:17:27.969 "write": true, 00:17:27.969 "unmap": false, 00:17:27.969 "flush": false, 00:17:27.969 "reset": true, 00:17:27.969 "nvme_admin": false, 00:17:27.969 "nvme_io": false, 00:17:27.969 "nvme_io_md": false, 00:17:27.969 "write_zeroes": true, 00:17:27.969 "zcopy": false, 00:17:27.969 "get_zone_info": false, 00:17:27.969 "zone_management": false, 00:17:27.969 "zone_append": false, 00:17:27.969 "compare": false, 00:17:27.969 "compare_and_write": false, 00:17:27.969 "abort": false, 00:17:27.969 "seek_hole": false, 00:17:27.969 "seek_data": false, 00:17:27.969 "copy": false, 00:17:27.969 "nvme_iov_md": false 00:17:27.969 }, 00:17:27.969 "memory_domains": [ 00:17:27.969 { 00:17:27.969 "dma_device_id": "system", 00:17:27.969 "dma_device_type": 1 00:17:27.969 }, 00:17:27.969 { 00:17:27.969 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:27.969 "dma_device_type": 2 00:17:27.969 }, 00:17:27.969 { 00:17:27.969 "dma_device_id": "system", 00:17:27.969 "dma_device_type": 1 00:17:27.969 }, 00:17:27.969 { 00:17:27.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.969 "dma_device_type": 2 00:17:27.969 }, 00:17:27.969 { 00:17:27.969 "dma_device_id": "system", 00:17:27.969 "dma_device_type": 1 00:17:27.969 }, 00:17:27.969 { 00:17:27.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.969 "dma_device_type": 2 00:17:27.969 } 00:17:27.969 ], 00:17:27.969 "driver_specific": { 00:17:27.969 "raid": { 00:17:27.969 "uuid": "1685fcbe-e59b-44d7-acc5-9cda31131ae2", 00:17:27.969 "strip_size_kb": 0, 00:17:27.969 "state": "online", 00:17:27.969 "raid_level": "raid1", 00:17:27.969 "superblock": true, 00:17:27.969 "num_base_bdevs": 3, 00:17:27.969 "num_base_bdevs_discovered": 3, 00:17:27.969 "num_base_bdevs_operational": 3, 00:17:27.969 "base_bdevs_list": [ 00:17:27.969 { 00:17:27.969 "name": "pt1", 00:17:27.969 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:27.969 "is_configured": true, 00:17:27.969 "data_offset": 2048, 00:17:27.969 "data_size": 63488 00:17:27.969 }, 00:17:27.969 { 00:17:27.969 "name": "pt2", 00:17:27.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:27.969 "is_configured": true, 00:17:27.969 "data_offset": 2048, 00:17:27.969 "data_size": 63488 00:17:27.969 }, 00:17:27.969 { 00:17:27.969 "name": "pt3", 00:17:27.969 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:27.969 "is_configured": true, 00:17:27.969 "data_offset": 2048, 00:17:27.969 "data_size": 63488 00:17:27.969 } 00:17:27.969 ] 00:17:27.969 } 00:17:27.969 } 00:17:27.969 }' 00:17:27.969 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:27.969 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:27.969 pt2 00:17:27.969 pt3' 00:17:27.969 
13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.969 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:27.969 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:28.229 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:28.229 "name": "pt1", 00:17:28.229 "aliases": [ 00:17:28.229 "00000000-0000-0000-0000-000000000001" 00:17:28.229 ], 00:17:28.229 "product_name": "passthru", 00:17:28.229 "block_size": 512, 00:17:28.229 "num_blocks": 65536, 00:17:28.229 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:28.229 "assigned_rate_limits": { 00:17:28.229 "rw_ios_per_sec": 0, 00:17:28.229 "rw_mbytes_per_sec": 0, 00:17:28.229 "r_mbytes_per_sec": 0, 00:17:28.229 "w_mbytes_per_sec": 0 00:17:28.229 }, 00:17:28.229 "claimed": true, 00:17:28.229 "claim_type": "exclusive_write", 00:17:28.229 "zoned": false, 00:17:28.229 "supported_io_types": { 00:17:28.229 "read": true, 00:17:28.229 "write": true, 00:17:28.229 "unmap": true, 00:17:28.229 "flush": true, 00:17:28.229 "reset": true, 00:17:28.229 "nvme_admin": false, 00:17:28.229 "nvme_io": false, 00:17:28.229 "nvme_io_md": false, 00:17:28.229 "write_zeroes": true, 00:17:28.229 "zcopy": true, 00:17:28.229 "get_zone_info": false, 00:17:28.229 "zone_management": false, 00:17:28.229 "zone_append": false, 00:17:28.229 "compare": false, 00:17:28.229 "compare_and_write": false, 00:17:28.229 "abort": true, 00:17:28.229 "seek_hole": false, 00:17:28.229 "seek_data": false, 00:17:28.229 "copy": true, 00:17:28.229 "nvme_iov_md": false 00:17:28.229 }, 00:17:28.229 "memory_domains": [ 00:17:28.229 { 00:17:28.229 "dma_device_id": "system", 00:17:28.229 "dma_device_type": 1 00:17:28.229 }, 00:17:28.229 { 00:17:28.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.229 
"dma_device_type": 2 00:17:28.229 } 00:17:28.229 ], 00:17:28.229 "driver_specific": { 00:17:28.229 "passthru": { 00:17:28.229 "name": "pt1", 00:17:28.229 "base_bdev_name": "malloc1" 00:17:28.229 } 00:17:28.229 } 00:17:28.229 }' 00:17:28.229 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.229 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.229 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:28.229 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.229 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.489 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:28.489 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.489 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.489 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:28.489 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.489 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.489 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:28.489 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:28.489 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:28.489 13:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:28.748 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:28.748 "name": "pt2", 00:17:28.748 "aliases": [ 00:17:28.748 
"00000000-0000-0000-0000-000000000002" 00:17:28.748 ], 00:17:28.748 "product_name": "passthru", 00:17:28.748 "block_size": 512, 00:17:28.748 "num_blocks": 65536, 00:17:28.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:28.748 "assigned_rate_limits": { 00:17:28.748 "rw_ios_per_sec": 0, 00:17:28.748 "rw_mbytes_per_sec": 0, 00:17:28.748 "r_mbytes_per_sec": 0, 00:17:28.748 "w_mbytes_per_sec": 0 00:17:28.748 }, 00:17:28.748 "claimed": true, 00:17:28.748 "claim_type": "exclusive_write", 00:17:28.748 "zoned": false, 00:17:28.748 "supported_io_types": { 00:17:28.748 "read": true, 00:17:28.748 "write": true, 00:17:28.748 "unmap": true, 00:17:28.748 "flush": true, 00:17:28.748 "reset": true, 00:17:28.748 "nvme_admin": false, 00:17:28.748 "nvme_io": false, 00:17:28.748 "nvme_io_md": false, 00:17:28.748 "write_zeroes": true, 00:17:28.748 "zcopy": true, 00:17:28.748 "get_zone_info": false, 00:17:28.748 "zone_management": false, 00:17:28.748 "zone_append": false, 00:17:28.748 "compare": false, 00:17:28.748 "compare_and_write": false, 00:17:28.748 "abort": true, 00:17:28.748 "seek_hole": false, 00:17:28.748 "seek_data": false, 00:17:28.748 "copy": true, 00:17:28.748 "nvme_iov_md": false 00:17:28.748 }, 00:17:28.748 "memory_domains": [ 00:17:28.748 { 00:17:28.748 "dma_device_id": "system", 00:17:28.748 "dma_device_type": 1 00:17:28.748 }, 00:17:28.748 { 00:17:28.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.748 "dma_device_type": 2 00:17:28.748 } 00:17:28.748 ], 00:17:28.748 "driver_specific": { 00:17:28.748 "passthru": { 00:17:28.748 "name": "pt2", 00:17:28.748 "base_bdev_name": "malloc2" 00:17:28.748 } 00:17:28.748 } 00:17:28.748 }' 00:17:28.748 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.748 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.748 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:28.748 13:17:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.007 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.007 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:29.007 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:29.007 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:29.007 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:29.007 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:29.007 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:29.007 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:29.007 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:29.007 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:29.007 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:29.266 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:29.266 "name": "pt3", 00:17:29.266 "aliases": [ 00:17:29.266 "00000000-0000-0000-0000-000000000003" 00:17:29.266 ], 00:17:29.266 "product_name": "passthru", 00:17:29.266 "block_size": 512, 00:17:29.266 "num_blocks": 65536, 00:17:29.266 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:29.266 "assigned_rate_limits": { 00:17:29.266 "rw_ios_per_sec": 0, 00:17:29.266 "rw_mbytes_per_sec": 0, 00:17:29.266 "r_mbytes_per_sec": 0, 00:17:29.266 "w_mbytes_per_sec": 0 00:17:29.266 }, 00:17:29.266 "claimed": true, 00:17:29.266 "claim_type": "exclusive_write", 00:17:29.266 "zoned": false, 00:17:29.266 "supported_io_types": { 
00:17:29.266 "read": true, 00:17:29.266 "write": true, 00:17:29.266 "unmap": true, 00:17:29.266 "flush": true, 00:17:29.266 "reset": true, 00:17:29.266 "nvme_admin": false, 00:17:29.266 "nvme_io": false, 00:17:29.266 "nvme_io_md": false, 00:17:29.266 "write_zeroes": true, 00:17:29.266 "zcopy": true, 00:17:29.266 "get_zone_info": false, 00:17:29.266 "zone_management": false, 00:17:29.266 "zone_append": false, 00:17:29.266 "compare": false, 00:17:29.266 "compare_and_write": false, 00:17:29.266 "abort": true, 00:17:29.266 "seek_hole": false, 00:17:29.266 "seek_data": false, 00:17:29.266 "copy": true, 00:17:29.266 "nvme_iov_md": false 00:17:29.266 }, 00:17:29.266 "memory_domains": [ 00:17:29.266 { 00:17:29.266 "dma_device_id": "system", 00:17:29.266 "dma_device_type": 1 00:17:29.266 }, 00:17:29.266 { 00:17:29.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.266 "dma_device_type": 2 00:17:29.266 } 00:17:29.266 ], 00:17:29.266 "driver_specific": { 00:17:29.266 "passthru": { 00:17:29.266 "name": "pt3", 00:17:29.266 "base_bdev_name": "malloc3" 00:17:29.266 } 00:17:29.266 } 00:17:29.266 }' 00:17:29.266 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:29.266 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:29.525 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:29.525 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.525 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.525 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:29.525 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:29.525 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:29.525 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:17:29.525 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:29.525 13:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:29.525 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:29.785 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:29.785 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:17:29.785 [2024-07-25 13:17:40.223961] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.785 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 1685fcbe-e59b-44d7-acc5-9cda31131ae2 '!=' 1685fcbe-e59b-44d7-acc5-9cda31131ae2 ']' 00:17:29.785 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:17:29.785 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:29.785 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:29.785 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:30.044 [2024-07-25 13:17:40.464367] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:30.044 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:30.044 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:30.044 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:30.044 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:30.044 13:17:40 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:30.044 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:30.044 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:30.044 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:30.044 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:30.044 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:30.044 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.044 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.303 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:30.303 "name": "raid_bdev1", 00:17:30.303 "uuid": "1685fcbe-e59b-44d7-acc5-9cda31131ae2", 00:17:30.303 "strip_size_kb": 0, 00:17:30.303 "state": "online", 00:17:30.303 "raid_level": "raid1", 00:17:30.303 "superblock": true, 00:17:30.303 "num_base_bdevs": 3, 00:17:30.303 "num_base_bdevs_discovered": 2, 00:17:30.303 "num_base_bdevs_operational": 2, 00:17:30.303 "base_bdevs_list": [ 00:17:30.303 { 00:17:30.303 "name": null, 00:17:30.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.303 "is_configured": false, 00:17:30.303 "data_offset": 2048, 00:17:30.303 "data_size": 63488 00:17:30.303 }, 00:17:30.303 { 00:17:30.303 "name": "pt2", 00:17:30.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.304 "is_configured": true, 00:17:30.304 "data_offset": 2048, 00:17:30.304 "data_size": 63488 00:17:30.304 }, 00:17:30.304 { 00:17:30.304 "name": "pt3", 00:17:30.304 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:30.304 "is_configured": true, 00:17:30.304 
"data_offset": 2048, 00:17:30.304 "data_size": 63488 00:17:30.304 } 00:17:30.304 ] 00:17:30.304 }' 00:17:30.304 13:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:30.304 13:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.872 13:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:31.131 [2024-07-25 13:17:41.499060] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.131 [2024-07-25 13:17:41.499081] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.131 [2024-07-25 13:17:41.499123] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.131 [2024-07-25 13:17:41.499177] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.131 [2024-07-25 13:17:41.499188] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc0d9f0 name raid_bdev1, state offline 00:17:31.131 13:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:17:31.131 13:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.425 13:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:17:31.425 13:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:17:31.425 13:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:17:31.425 13:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:17:31.425 13:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:31.712 13:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:17:31.712 13:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:17:31.712 13:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:31.712 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:17:31.712 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:17:31.712 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:17:31.712 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:17:31.712 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:31.971 [2024-07-25 13:17:42.353276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:31.971 [2024-07-25 13:17:42.353319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.971 [2024-07-25 13:17:42.353334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc09be0 00:17:31.971 [2024-07-25 13:17:42.353346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.971 [2024-07-25 13:17:42.354824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.971 [2024-07-25 13:17:42.354851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:31.971 [2024-07-25 13:17:42.354909] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:31.971 [2024-07-25 13:17:42.354934] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:31.971 pt2 00:17:31.971 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:31.971 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:31.971 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:31.971 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:31.971 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:31.971 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:31.971 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:31.971 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:31.971 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:31.971 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:31.971 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.971 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.230 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.230 "name": "raid_bdev1", 00:17:32.230 "uuid": "1685fcbe-e59b-44d7-acc5-9cda31131ae2", 00:17:32.230 "strip_size_kb": 0, 00:17:32.230 "state": "configuring", 00:17:32.230 "raid_level": "raid1", 00:17:32.230 "superblock": true, 00:17:32.230 "num_base_bdevs": 3, 00:17:32.230 "num_base_bdevs_discovered": 1, 00:17:32.230 "num_base_bdevs_operational": 2, 
00:17:32.230 "base_bdevs_list": [ 00:17:32.230 { 00:17:32.230 "name": null, 00:17:32.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.230 "is_configured": false, 00:17:32.230 "data_offset": 2048, 00:17:32.231 "data_size": 63488 00:17:32.231 }, 00:17:32.231 { 00:17:32.231 "name": "pt2", 00:17:32.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.231 "is_configured": true, 00:17:32.231 "data_offset": 2048, 00:17:32.231 "data_size": 63488 00:17:32.231 }, 00:17:32.231 { 00:17:32.231 "name": null, 00:17:32.231 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:32.231 "is_configured": false, 00:17:32.231 "data_offset": 2048, 00:17:32.231 "data_size": 63488 00:17:32.231 } 00:17:32.231 ] 00:17:32.231 }' 00:17:32.231 13:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.231 13:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.799 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:17:32.799 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:17:32.799 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:17:32.799 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:33.058 [2024-07-25 13:17:43.379982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:33.058 [2024-07-25 13:17:43.380025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.058 [2024-07-25 13:17:43.380042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc11950 00:17:33.058 [2024-07-25 13:17:43.380053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.058 [2024-07-25 13:17:43.380360] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.058 [2024-07-25 13:17:43.380377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:33.058 [2024-07-25 13:17:43.380429] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:33.059 [2024-07-25 13:17:43.380447] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:33.059 [2024-07-25 13:17:43.380533] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xc0d970 00:17:33.059 [2024-07-25 13:17:43.380543] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:33.059 [2024-07-25 13:17:43.380697] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa6ef10 00:17:33.059 [2024-07-25 13:17:43.380811] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc0d970 00:17:33.059 [2024-07-25 13:17:43.380820] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc0d970 00:17:33.059 [2024-07-25 13:17:43.380904] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.059 pt3 00:17:33.059 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:33.059 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:33.059 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:33.059 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:33.059 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:33.059 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:33.059 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:17:33.059 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.059 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.059 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.059 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.059 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.318 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:33.318 "name": "raid_bdev1", 00:17:33.318 "uuid": "1685fcbe-e59b-44d7-acc5-9cda31131ae2", 00:17:33.318 "strip_size_kb": 0, 00:17:33.318 "state": "online", 00:17:33.318 "raid_level": "raid1", 00:17:33.318 "superblock": true, 00:17:33.318 "num_base_bdevs": 3, 00:17:33.318 "num_base_bdevs_discovered": 2, 00:17:33.318 "num_base_bdevs_operational": 2, 00:17:33.318 "base_bdevs_list": [ 00:17:33.318 { 00:17:33.318 "name": null, 00:17:33.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.318 "is_configured": false, 00:17:33.318 "data_offset": 2048, 00:17:33.318 "data_size": 63488 00:17:33.318 }, 00:17:33.318 { 00:17:33.318 "name": "pt2", 00:17:33.318 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.318 "is_configured": true, 00:17:33.318 "data_offset": 2048, 00:17:33.318 "data_size": 63488 00:17:33.318 }, 00:17:33.318 { 00:17:33.318 "name": "pt3", 00:17:33.318 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:33.318 "is_configured": true, 00:17:33.318 "data_offset": 2048, 00:17:33.318 "data_size": 63488 00:17:33.318 } 00:17:33.318 ] 00:17:33.318 }' 00:17:33.318 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:33.318 13:17:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.886 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:33.886 [2024-07-25 13:17:44.370572] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.886 [2024-07-25 13:17:44.370595] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.886 [2024-07-25 13:17:44.370640] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.886 [2024-07-25 13:17:44.370690] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.886 [2024-07-25 13:17:44.370700] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc0d970 name raid_bdev1, state offline 00:17:34.146 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.146 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:17:34.146 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:17:34.146 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:17:34.146 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 3 -gt 2 ']' 00:17:34.146 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=2 00:17:34.146 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:34.405 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:34.664 [2024-07-25 13:17:45.048337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:34.664 [2024-07-25 13:17:45.048376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.664 [2024-07-25 13:17:45.048393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc097e0 00:17:34.664 [2024-07-25 13:17:45.048404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.664 [2024-07-25 13:17:45.049895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.664 [2024-07-25 13:17:45.049923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:34.664 [2024-07-25 13:17:45.049978] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:34.664 [2024-07-25 13:17:45.050000] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:34.664 [2024-07-25 13:17:45.050086] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:34.665 [2024-07-25 13:17:45.050098] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.665 [2024-07-25 13:17:45.050110] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc13020 name raid_bdev1, state configuring 00:17:34.665 [2024-07-25 13:17:45.050131] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:34.665 pt1 00:17:34.665 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3 -gt 2 ']' 00:17:34.665 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:34.665 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:34.665 13:17:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:34.665 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:34.665 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:34.665 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:34.665 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.665 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.665 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.665 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.665 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.665 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.924 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.924 "name": "raid_bdev1", 00:17:34.924 "uuid": "1685fcbe-e59b-44d7-acc5-9cda31131ae2", 00:17:34.924 "strip_size_kb": 0, 00:17:34.924 "state": "configuring", 00:17:34.924 "raid_level": "raid1", 00:17:34.924 "superblock": true, 00:17:34.924 "num_base_bdevs": 3, 00:17:34.924 "num_base_bdevs_discovered": 1, 00:17:34.924 "num_base_bdevs_operational": 2, 00:17:34.924 "base_bdevs_list": [ 00:17:34.924 { 00:17:34.924 "name": null, 00:17:34.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.924 "is_configured": false, 00:17:34.924 "data_offset": 2048, 00:17:34.924 "data_size": 63488 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "name": "pt2", 00:17:34.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.924 "is_configured": true, 00:17:34.924 
"data_offset": 2048, 00:17:34.924 "data_size": 63488 00:17:34.924 }, 00:17:34.924 { 00:17:34.924 "name": null, 00:17:34.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:34.924 "is_configured": false, 00:17:34.924 "data_offset": 2048, 00:17:34.924 "data_size": 63488 00:17:34.924 } 00:17:34.924 ] 00:17:34.924 }' 00:17:34.924 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.924 13:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.492 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:17:35.492 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:35.752 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:17:35.752 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:36.011 [2024-07-25 13:17:46.343743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:36.011 [2024-07-25 13:17:46.343790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.011 [2024-07-25 13:17:46.343806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc08b00 00:17:36.011 [2024-07-25 13:17:46.343818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.011 [2024-07-25 13:17:46.344120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.011 [2024-07-25 13:17:46.344136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:36.011 [2024-07-25 13:17:46.344199] 
bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:36.011 [2024-07-25 13:17:46.344217] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:36.011 [2024-07-25 13:17:46.344303] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xc0ba60 00:17:36.011 [2024-07-25 13:17:46.344313] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:36.011 [2024-07-25 13:17:46.344469] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc0c180 00:17:36.011 [2024-07-25 13:17:46.344585] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc0ba60 00:17:36.011 [2024-07-25 13:17:46.344594] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc0ba60 00:17:36.011 [2024-07-25 13:17:46.344680] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.011 pt3 00:17:36.011 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:36.011 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:36.011 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:36.011 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:36.011 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:36.011 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:36.011 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.011 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.011 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:36.011 
13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.011 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.011 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.271 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.271 "name": "raid_bdev1", 00:17:36.271 "uuid": "1685fcbe-e59b-44d7-acc5-9cda31131ae2", 00:17:36.271 "strip_size_kb": 0, 00:17:36.271 "state": "online", 00:17:36.271 "raid_level": "raid1", 00:17:36.271 "superblock": true, 00:17:36.271 "num_base_bdevs": 3, 00:17:36.271 "num_base_bdevs_discovered": 2, 00:17:36.271 "num_base_bdevs_operational": 2, 00:17:36.271 "base_bdevs_list": [ 00:17:36.271 { 00:17:36.271 "name": null, 00:17:36.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.271 "is_configured": false, 00:17:36.271 "data_offset": 2048, 00:17:36.271 "data_size": 63488 00:17:36.271 }, 00:17:36.271 { 00:17:36.271 "name": "pt2", 00:17:36.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.271 "is_configured": true, 00:17:36.271 "data_offset": 2048, 00:17:36.271 "data_size": 63488 00:17:36.271 }, 00:17:36.271 { 00:17:36.271 "name": "pt3", 00:17:36.271 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:36.271 "is_configured": true, 00:17:36.271 "data_offset": 2048, 00:17:36.271 "data_size": 63488 00:17:36.271 } 00:17:36.271 ] 00:17:36.271 }' 00:17:36.271 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.271 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.839 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:36.839 
13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:37.098 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:17:37.098 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:37.098 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:17:37.357 [2024-07-25 13:17:47.607307] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.357 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 1685fcbe-e59b-44d7-acc5-9cda31131ae2 '!=' 1685fcbe-e59b-44d7-acc5-9cda31131ae2 ']' 00:17:37.358 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 893000 00:17:37.358 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 893000 ']' 00:17:37.358 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 893000 00:17:37.358 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:17:37.358 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:37.358 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 893000 00:17:37.358 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:37.358 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:37.358 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 893000' 00:17:37.358 killing process with pid 893000 00:17:37.358 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 893000 00:17:37.358 
[2024-07-25 13:17:47.687240] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:37.358 [2024-07-25 13:17:47.687297] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.358 [2024-07-25 13:17:47.687346] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.358 [2024-07-25 13:17:47.687357] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc0ba60 name raid_bdev1, state offline 00:17:37.358 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 893000 00:17:37.358 [2024-07-25 13:17:47.712371] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:37.617 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:17:37.617 00:17:37.617 real 0m20.634s 00:17:37.617 user 0m37.637s 00:17:37.617 sys 0m3.771s 00:17:37.617 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:37.617 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.617 ************************************ 00:17:37.617 END TEST raid_superblock_test 00:17:37.617 ************************************ 00:17:37.618 13:17:47 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:17:37.618 13:17:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:37.618 13:17:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:37.618 13:17:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:37.618 ************************************ 00:17:37.618 START TEST raid_read_error_test 00:17:37.618 ************************************ 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 
00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:17:37.618 13:17:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:17:37.618 13:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:17:37.618 13:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.Orggu1Jt3U 00:17:37.618 13:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=896998 00:17:37.618 13:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 896998 /var/tmp/spdk-raid.sock 00:17:37.618 13:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:37.618 13:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 896998 ']' 00:17:37.618 13:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:37.618 13:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.618 13:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:37.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:37.618 13:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.618 13:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.618 [2024-07-25 13:17:48.060073] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:17:37.618 [2024-07-25 13:17:48.060130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid896998 ] 00:17:37.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3d:02.3 cannot be used 
00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:37.878 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:37.878 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:37.878 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:37.878 [2024-07-25 13:17:48.191441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.878 [2024-07-25 13:17:48.278304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.878 [2024-07-25 13:17:48.338117] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.878 [2024-07-25 13:17:48.338163] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.815 13:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.815 13:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:38.815 13:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:38.815 13:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:38.815 BaseBdev1_malloc 00:17:38.815 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:39.074 true 00:17:39.074 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:39.334 [2024-07-25 13:17:49.616033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:39.334 [2024-07-25 13:17:49.616073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.334 [2024-07-25 13:17:49.616092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd981d0 00:17:39.334 [2024-07-25 13:17:49.616103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.334 [2024-07-25 13:17:49.617738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.334 [2024-07-25 13:17:49.617765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:39.334 BaseBdev1 00:17:39.334 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:39.334 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:39.593 BaseBdev2_malloc 00:17:39.593 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:39.593 true 00:17:39.852 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:39.852 [2024-07-25 13:17:50.302189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:17:39.852 [2024-07-25 13:17:50.302226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.852 [2024-07-25 13:17:50.302243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd9b710 00:17:39.852 [2024-07-25 13:17:50.302255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.852 [2024-07-25 13:17:50.303550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.852 [2024-07-25 13:17:50.303580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:39.852 BaseBdev2 00:17:39.852 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:39.852 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:40.111 BaseBdev3_malloc 00:17:40.111 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:40.371 true 00:17:40.371 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:40.630 [2024-07-25 13:17:50.984157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:40.630 [2024-07-25 13:17:50.984196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.630 [2024-07-25 13:17:50.984216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd9dde0 00:17:40.630 [2024-07-25 13:17:50.984228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.630 [2024-07-25 13:17:50.985600] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.630 [2024-07-25 13:17:50.985627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:40.630 BaseBdev3 00:17:40.630 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:40.889 [2024-07-25 13:17:51.208765] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.889 [2024-07-25 13:17:51.209945] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:40.889 [2024-07-25 13:17:51.210005] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:40.889 [2024-07-25 13:17:51.210189] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xd9f780 00:17:40.889 [2024-07-25 13:17:51.210200] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:40.889 [2024-07-25 13:17:51.210381] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xda42c0 00:17:40.889 [2024-07-25 13:17:51.210518] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd9f780 00:17:40.889 [2024-07-25 13:17:51.210528] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd9f780 00:17:40.889 [2024-07-25 13:17:51.210637] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.889 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:40.889 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:40.889 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:40.889 13:17:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:40.889 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:40.889 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:40.889 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.889 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.889 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.889 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.889 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.889 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.147 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.147 "name": "raid_bdev1", 00:17:41.147 "uuid": "aaf9cace-8d00-4717-8078-38faf11a4e9a", 00:17:41.147 "strip_size_kb": 0, 00:17:41.147 "state": "online", 00:17:41.147 "raid_level": "raid1", 00:17:41.147 "superblock": true, 00:17:41.147 "num_base_bdevs": 3, 00:17:41.147 "num_base_bdevs_discovered": 3, 00:17:41.147 "num_base_bdevs_operational": 3, 00:17:41.147 "base_bdevs_list": [ 00:17:41.147 { 00:17:41.147 "name": "BaseBdev1", 00:17:41.147 "uuid": "e6ff4f8c-e2fa-5d3e-908d-a52dde362e78", 00:17:41.147 "is_configured": true, 00:17:41.147 "data_offset": 2048, 00:17:41.147 "data_size": 63488 00:17:41.147 }, 00:17:41.147 { 00:17:41.147 "name": "BaseBdev2", 00:17:41.147 "uuid": "655dc1c4-b2fc-5c94-b371-a223ec100280", 00:17:41.147 "is_configured": true, 00:17:41.147 "data_offset": 2048, 00:17:41.147 "data_size": 63488 00:17:41.147 }, 
00:17:41.147 { 00:17:41.147 "name": "BaseBdev3", 00:17:41.147 "uuid": "867dbc1d-b118-514c-b2fc-4a0b775ab63c", 00:17:41.147 "is_configured": true, 00:17:41.147 "data_offset": 2048, 00:17:41.147 "data_size": 63488 00:17:41.147 } 00:17:41.147 ] 00:17:41.147 }' 00:17:41.147 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.147 13:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.714 13:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:17:41.714 13:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:41.714 [2024-07-25 13:17:52.143472] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xda1270 00:17:42.650 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:42.910 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:17:42.910 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:42.910 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:17:42.910 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:17:42.910 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:42.910 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:42.910 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:42.910 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:17:42.910 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:42.910 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:42.910 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:42.910 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:42.910 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:42.910 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:42.910 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.910 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.168 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.168 "name": "raid_bdev1", 00:17:43.168 "uuid": "aaf9cace-8d00-4717-8078-38faf11a4e9a", 00:17:43.168 "strip_size_kb": 0, 00:17:43.168 "state": "online", 00:17:43.168 "raid_level": "raid1", 00:17:43.168 "superblock": true, 00:17:43.168 "num_base_bdevs": 3, 00:17:43.168 "num_base_bdevs_discovered": 3, 00:17:43.168 "num_base_bdevs_operational": 3, 00:17:43.168 "base_bdevs_list": [ 00:17:43.168 { 00:17:43.168 "name": "BaseBdev1", 00:17:43.168 "uuid": "e6ff4f8c-e2fa-5d3e-908d-a52dde362e78", 00:17:43.168 "is_configured": true, 00:17:43.168 "data_offset": 2048, 00:17:43.168 "data_size": 63488 00:17:43.168 }, 00:17:43.168 { 00:17:43.168 "name": "BaseBdev2", 00:17:43.168 "uuid": "655dc1c4-b2fc-5c94-b371-a223ec100280", 00:17:43.168 "is_configured": true, 00:17:43.168 "data_offset": 2048, 00:17:43.168 "data_size": 63488 00:17:43.168 }, 00:17:43.168 { 00:17:43.168 "name": "BaseBdev3", 00:17:43.168 "uuid": 
"867dbc1d-b118-514c-b2fc-4a0b775ab63c", 00:17:43.168 "is_configured": true, 00:17:43.168 "data_offset": 2048, 00:17:43.168 "data_size": 63488 00:17:43.168 } 00:17:43.168 ] 00:17:43.168 }' 00:17:43.168 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.168 13:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.735 13:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:43.995 [2024-07-25 13:17:54.287320] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:43.995 [2024-07-25 13:17:54.287352] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.995 [2024-07-25 13:17:54.290259] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.995 [2024-07-25 13:17:54.290290] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.995 [2024-07-25 13:17:54.290375] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.995 [2024-07-25 13:17:54.290385] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd9f780 name raid_bdev1, state offline 00:17:43.995 0 00:17:43.995 13:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 896998 00:17:43.995 13:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 896998 ']' 00:17:43.995 13:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 896998 00:17:43.995 13:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:17:43.995 13:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.995 13:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 896998 00:17:43.995 13:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:43.995 13:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:43.995 13:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 896998' 00:17:43.995 killing process with pid 896998 00:17:43.995 13:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 896998 00:17:43.995 [2024-07-25 13:17:54.345510] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:43.995 13:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 896998 00:17:43.995 [2024-07-25 13:17:54.364794] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:44.254 13:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:17:44.254 13:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:17:44.254 13:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.Orggu1Jt3U 00:17:44.254 13:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:17:44.254 13:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:17:44.254 13:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:44.254 13:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:44.254 13:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:44.254 00:17:44.254 real 0m6.582s 00:17:44.254 user 0m10.321s 00:17:44.254 sys 0m1.193s 00:17:44.254 13:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:44.254 13:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.254 
************************************ 00:17:44.254 END TEST raid_read_error_test 00:17:44.254 ************************************ 00:17:44.254 13:17:54 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:17:44.254 13:17:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:44.254 13:17:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:44.254 13:17:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:44.254 ************************************ 00:17:44.254 START TEST raid_write_error_test 00:17:44.254 ************************************ 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # 
echo BaseBdev3 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.5leeim5pAb 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=898301 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 898301 /var/tmp/spdk-raid.sock 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 898301 ']' 
00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:44.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.254 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.254 [2024-07-25 13:17:54.710635] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:17:44.254 [2024-07-25 13:17:54.710691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898301 ] 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3d:01.5 
cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3f:01.3 cannot be used 
00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:44.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.513 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:44.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.514 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:44.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.514 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:44.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.514 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:44.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.514 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:44.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.514 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:44.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.514 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:44.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.514 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:44.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.514 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:44.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.514 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:44.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:44.514 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:44.514 [2024-07-25 13:17:54.841043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.514 [2024-07-25 13:17:54.927096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.514 [2024-07-25 13:17:54.991117] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:17:44.514 [2024-07-25 13:17:54.991161] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.483 13:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:45.483 13:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:45.483 13:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:45.483 13:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:45.483 BaseBdev1_malloc 00:17:45.483 13:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:45.741 true 00:17:45.741 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:46.001 [2024-07-25 13:17:56.269596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:46.001 [2024-07-25 13:17:56.269635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.001 [2024-07-25 13:17:56.269653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10591d0 00:17:46.001 [2024-07-25 13:17:56.269665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.001 [2024-07-25 13:17:56.271234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.001 [2024-07-25 13:17:56.271261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:46.001 BaseBdev1 00:17:46.001 13:17:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:46.001 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:46.259 BaseBdev2_malloc 00:17:46.259 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:46.259 true 00:17:46.259 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:46.518 [2024-07-25 13:17:56.931518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:46.518 [2024-07-25 13:17:56.931556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.518 [2024-07-25 13:17:56.931573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x105c710 00:17:46.518 [2024-07-25 13:17:56.931585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.518 [2024-07-25 13:17:56.932960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.518 [2024-07-25 13:17:56.932985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:46.518 BaseBdev2 00:17:46.518 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:46.518 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:46.777 BaseBdev3_malloc 00:17:46.777 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:47.035 true 00:17:47.035 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:47.294 [2024-07-25 13:17:57.601560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:47.294 [2024-07-25 13:17:57.601600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.294 [2024-07-25 13:17:57.601620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x105ede0 00:17:47.294 [2024-07-25 13:17:57.601631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.294 [2024-07-25 13:17:57.603021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.294 [2024-07-25 13:17:57.603047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:47.294 BaseBdev3 00:17:47.294 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:47.553 [2024-07-25 13:17:57.814151] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.553 [2024-07-25 13:17:57.815278] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:47.553 [2024-07-25 13:17:57.815338] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:47.553 [2024-07-25 13:17:57.815513] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1060780 00:17:47.553 [2024-07-25 13:17:57.815523] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, 
blocklen 512 00:17:47.553 [2024-07-25 13:17:57.815697] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10652c0 00:17:47.553 [2024-07-25 13:17:57.815830] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1060780 00:17:47.553 [2024-07-25 13:17:57.815839] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1060780 00:17:47.553 [2024-07-25 13:17:57.815945] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.553 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:47.553 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:47.553 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:47.553 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:47.553 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:47.553 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:47.553 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.553 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.553 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.553 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.553 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.553 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.812 13:17:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.812 "name": "raid_bdev1", 00:17:47.812 "uuid": "c8ee4130-cf28-4423-8904-b7fc8ee99109", 00:17:47.812 "strip_size_kb": 0, 00:17:47.812 "state": "online", 00:17:47.812 "raid_level": "raid1", 00:17:47.812 "superblock": true, 00:17:47.812 "num_base_bdevs": 3, 00:17:47.812 "num_base_bdevs_discovered": 3, 00:17:47.812 "num_base_bdevs_operational": 3, 00:17:47.812 "base_bdevs_list": [ 00:17:47.812 { 00:17:47.812 "name": "BaseBdev1", 00:17:47.812 "uuid": "bc03fd0c-0463-5839-b3c6-84cdef5ddee5", 00:17:47.812 "is_configured": true, 00:17:47.812 "data_offset": 2048, 00:17:47.812 "data_size": 63488 00:17:47.812 }, 00:17:47.812 { 00:17:47.812 "name": "BaseBdev2", 00:17:47.812 "uuid": "835d7442-0f7d-5862-9cf6-b0d5938beb99", 00:17:47.812 "is_configured": true, 00:17:47.812 "data_offset": 2048, 00:17:47.812 "data_size": 63488 00:17:47.812 }, 00:17:47.812 { 00:17:47.812 "name": "BaseBdev3", 00:17:47.812 "uuid": "0366f982-1fdd-564b-a9b5-acbbd5259182", 00:17:47.812 "is_configured": true, 00:17:47.812 "data_offset": 2048, 00:17:47.812 "data_size": 63488 00:17:47.812 } 00:17:47.812 ] 00:17:47.812 }' 00:17:47.812 13:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.812 13:17:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.380 13:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:17:48.380 13:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:48.380 [2024-07-25 13:17:58.720767] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1062270 00:17:49.318 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error 
EE_BaseBdev1_malloc write failure 00:17:49.578 [2024-07-25 13:17:59.831547] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:49.578 [2024-07-25 13:17:59.831600] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:49.578 [2024-07-25 13:17:59.831785] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1062270 00:17:49.578 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:17:49.578 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:49.578 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:17:49.578 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=2 00:17:49.578 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:49.578 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:49.578 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:49.578 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:49.578 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:49.578 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:49.578 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.578 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.578 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.579 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:17:49.579 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.579 13:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.838 13:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.838 "name": "raid_bdev1", 00:17:49.838 "uuid": "c8ee4130-cf28-4423-8904-b7fc8ee99109", 00:17:49.838 "strip_size_kb": 0, 00:17:49.838 "state": "online", 00:17:49.838 "raid_level": "raid1", 00:17:49.838 "superblock": true, 00:17:49.838 "num_base_bdevs": 3, 00:17:49.838 "num_base_bdevs_discovered": 2, 00:17:49.838 "num_base_bdevs_operational": 2, 00:17:49.838 "base_bdevs_list": [ 00:17:49.838 { 00:17:49.838 "name": null, 00:17:49.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.838 "is_configured": false, 00:17:49.838 "data_offset": 2048, 00:17:49.838 "data_size": 63488 00:17:49.838 }, 00:17:49.838 { 00:17:49.838 "name": "BaseBdev2", 00:17:49.838 "uuid": "835d7442-0f7d-5862-9cf6-b0d5938beb99", 00:17:49.838 "is_configured": true, 00:17:49.838 "data_offset": 2048, 00:17:49.838 "data_size": 63488 00:17:49.838 }, 00:17:49.838 { 00:17:49.838 "name": "BaseBdev3", 00:17:49.838 "uuid": "0366f982-1fdd-564b-a9b5-acbbd5259182", 00:17:49.838 "is_configured": true, 00:17:49.838 "data_offset": 2048, 00:17:49.838 "data_size": 63488 00:17:49.838 } 00:17:49.838 ] 00:17:49.838 }' 00:17:49.838 13:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.838 13:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.407 13:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:50.407 [2024-07-25 13:18:00.861324] 
bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.407 [2024-07-25 13:18:00.861359] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.407 [2024-07-25 13:18:00.864285] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.407 [2024-07-25 13:18:00.864314] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.407 [2024-07-25 13:18:00.864384] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.407 [2024-07-25 13:18:00.864394] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1060780 name raid_bdev1, state offline 00:17:50.407 0 00:17:50.407 13:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 898301 00:17:50.407 13:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 898301 ']' 00:17:50.407 13:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 898301 00:17:50.407 13:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:17:50.407 13:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:50.407 13:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 898301 00:17:50.667 13:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:50.667 13:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:50.667 13:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 898301' 00:17:50.667 killing process with pid 898301 00:17:50.667 13:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 898301 00:17:50.667 [2024-07-25 13:18:00.919990] 
bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:50.667 13:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 898301 00:17:50.667 [2024-07-25 13:18:00.938016] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:50.667 13:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.5leeim5pAb 00:17:50.667 13:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:17:50.667 13:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:17:50.667 13:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:17:50.667 13:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:17:50.667 13:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:50.667 13:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:50.667 13:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:50.667 00:17:50.667 real 0m6.505s 00:17:50.667 user 0m10.289s 00:17:50.667 sys 0m1.103s 00:17:50.667 13:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:50.667 13:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.667 ************************************ 00:17:50.667 END TEST raid_write_error_test 00:17:50.667 ************************************ 00:17:50.927 13:18:01 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:17:50.927 13:18:01 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:17:50.927 13:18:01 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:50.927 13:18:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:50.927 13:18:01 bdev_raid -- common/autotest_common.sh@1107 -- 
# xtrace_disable 00:17:50.927 13:18:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:50.927 ************************************ 00:17:50.927 START TEST raid_state_function_test 00:17:50.927 ************************************ 00:17:50.927 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:17:50.927 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:17:50.927 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:50.927 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:50.927 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:50.927 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:50.927 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:50.927 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:50.927 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:50.927 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:50.927 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:50.927 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:50.928 13:18:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=899543 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 899543' 00:17:50.928 Process raid pid: 899543 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 
-L bdev_raid 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 899543 /var/tmp/spdk-raid.sock 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 899543 ']' 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:50.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:50.928 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.928 [2024-07-25 13:18:01.283583] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:17:50.928 [2024-07-25 13:18:01.283642] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:50.928 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:50.928 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:50.928 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.928 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:51.188 [2024-07-25 13:18:01.416300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.188 [2024-07-25 13:18:01.503131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.188 [2024-07-25 13:18:01.564690] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.188 [2024-07-25 13:18:01.564724] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.756 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.756 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:17:51.757 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:52.016 [2024-07-25 13:18:02.382552] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:52.016 [2024-07-25 13:18:02.382591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:17:52.016 [2024-07-25 13:18:02.382601] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:52.016 [2024-07-25 13:18:02.382612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:52.016 [2024-07-25 13:18:02.382620] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:52.016 [2024-07-25 13:18:02.382630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:52.016 [2024-07-25 13:18:02.382638] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:52.016 [2024-07-25 13:18:02.382651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:52.016 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:52.016 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:52.016 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:52.016 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:52.016 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:52.016 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:52.016 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:52.016 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.016 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.016 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.016 13:18:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.016 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.276 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.276 "name": "Existed_Raid", 00:17:52.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.276 "strip_size_kb": 64, 00:17:52.276 "state": "configuring", 00:17:52.276 "raid_level": "raid0", 00:17:52.276 "superblock": false, 00:17:52.276 "num_base_bdevs": 4, 00:17:52.276 "num_base_bdevs_discovered": 0, 00:17:52.276 "num_base_bdevs_operational": 4, 00:17:52.276 "base_bdevs_list": [ 00:17:52.276 { 00:17:52.276 "name": "BaseBdev1", 00:17:52.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.276 "is_configured": false, 00:17:52.276 "data_offset": 0, 00:17:52.276 "data_size": 0 00:17:52.276 }, 00:17:52.276 { 00:17:52.276 "name": "BaseBdev2", 00:17:52.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.276 "is_configured": false, 00:17:52.276 "data_offset": 0, 00:17:52.276 "data_size": 0 00:17:52.276 }, 00:17:52.276 { 00:17:52.276 "name": "BaseBdev3", 00:17:52.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.276 "is_configured": false, 00:17:52.276 "data_offset": 0, 00:17:52.276 "data_size": 0 00:17:52.276 }, 00:17:52.276 { 00:17:52.276 "name": "BaseBdev4", 00:17:52.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.276 "is_configured": false, 00:17:52.276 "data_offset": 0, 00:17:52.276 "data_size": 0 00:17:52.276 } 00:17:52.276 ] 00:17:52.276 }' 00:17:52.276 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.276 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.846 13:18:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:53.105 [2024-07-25 13:18:03.409125] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:53.105 [2024-07-25 13:18:03.409161] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c3ef60 name Existed_Raid, state configuring 00:17:53.105 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:53.364 [2024-07-25 13:18:03.637744] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:53.364 [2024-07-25 13:18:03.637772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:53.364 [2024-07-25 13:18:03.637780] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.364 [2024-07-25 13:18:03.637791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.364 [2024-07-25 13:18:03.637799] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:53.364 [2024-07-25 13:18:03.637810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:53.364 [2024-07-25 13:18:03.637818] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:53.365 [2024-07-25 13:18:03.637828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:53.365 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:53.624 [2024-07-25 13:18:03.875785] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:53.624 BaseBdev1 00:17:53.624 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:53.624 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:53.624 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:53.624 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:53.624 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:53.624 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:53.624 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:53.883 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:53.883 [ 00:17:53.883 { 00:17:53.883 "name": "BaseBdev1", 00:17:53.883 "aliases": [ 00:17:53.883 "44e56dd5-3452-421a-9b26-2a39c17f871f" 00:17:53.883 ], 00:17:53.883 "product_name": "Malloc disk", 00:17:53.883 "block_size": 512, 00:17:53.883 "num_blocks": 65536, 00:17:53.883 "uuid": "44e56dd5-3452-421a-9b26-2a39c17f871f", 00:17:53.883 "assigned_rate_limits": { 00:17:53.883 "rw_ios_per_sec": 0, 00:17:53.883 "rw_mbytes_per_sec": 0, 00:17:53.883 "r_mbytes_per_sec": 0, 00:17:53.883 "w_mbytes_per_sec": 0 00:17:53.883 }, 00:17:53.883 "claimed": true, 00:17:53.883 "claim_type": "exclusive_write", 00:17:53.883 "zoned": false, 00:17:53.883 "supported_io_types": { 00:17:53.883 "read": true, 00:17:53.883 "write": true, 00:17:53.883 "unmap": true, 00:17:53.883 "flush": true, 00:17:53.883 
"reset": true, 00:17:53.883 "nvme_admin": false, 00:17:53.883 "nvme_io": false, 00:17:53.883 "nvme_io_md": false, 00:17:53.883 "write_zeroes": true, 00:17:53.883 "zcopy": true, 00:17:53.883 "get_zone_info": false, 00:17:53.883 "zone_management": false, 00:17:53.883 "zone_append": false, 00:17:53.883 "compare": false, 00:17:53.883 "compare_and_write": false, 00:17:53.883 "abort": true, 00:17:53.883 "seek_hole": false, 00:17:53.883 "seek_data": false, 00:17:53.883 "copy": true, 00:17:53.883 "nvme_iov_md": false 00:17:53.883 }, 00:17:53.883 "memory_domains": [ 00:17:53.883 { 00:17:53.883 "dma_device_id": "system", 00:17:53.883 "dma_device_type": 1 00:17:53.883 }, 00:17:53.883 { 00:17:53.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.883 "dma_device_type": 2 00:17:53.883 } 00:17:53.883 ], 00:17:53.883 "driver_specific": {} 00:17:53.883 } 00:17:53.883 ] 00:17:53.883 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:53.883 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:53.883 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.883 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:53.883 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:53.883 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:53.883 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:53.883 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.883 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.883 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- 
# local num_base_bdevs_discovered 00:17:53.883 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.883 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.883 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.142 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.142 "name": "Existed_Raid", 00:17:54.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.142 "strip_size_kb": 64, 00:17:54.142 "state": "configuring", 00:17:54.142 "raid_level": "raid0", 00:17:54.142 "superblock": false, 00:17:54.142 "num_base_bdevs": 4, 00:17:54.142 "num_base_bdevs_discovered": 1, 00:17:54.142 "num_base_bdevs_operational": 4, 00:17:54.142 "base_bdevs_list": [ 00:17:54.142 { 00:17:54.142 "name": "BaseBdev1", 00:17:54.142 "uuid": "44e56dd5-3452-421a-9b26-2a39c17f871f", 00:17:54.142 "is_configured": true, 00:17:54.142 "data_offset": 0, 00:17:54.142 "data_size": 65536 00:17:54.142 }, 00:17:54.142 { 00:17:54.142 "name": "BaseBdev2", 00:17:54.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.142 "is_configured": false, 00:17:54.142 "data_offset": 0, 00:17:54.142 "data_size": 0 00:17:54.142 }, 00:17:54.142 { 00:17:54.142 "name": "BaseBdev3", 00:17:54.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.142 "is_configured": false, 00:17:54.142 "data_offset": 0, 00:17:54.142 "data_size": 0 00:17:54.142 }, 00:17:54.142 { 00:17:54.142 "name": "BaseBdev4", 00:17:54.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.142 "is_configured": false, 00:17:54.142 "data_offset": 0, 00:17:54.142 "data_size": 0 00:17:54.142 } 00:17:54.142 ] 00:17:54.142 }' 00:17:54.142 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:17:54.142 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.709 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:54.968 [2024-07-25 13:18:05.399942] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:54.968 [2024-07-25 13:18:05.399977] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c3e7d0 name Existed_Raid, state configuring 00:17:54.968 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:55.227 [2024-07-25 13:18:05.632589] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.228 [2024-07-25 13:18:05.633962] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.228 [2024-07-25 13:18:05.633995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.228 [2024-07-25 13:18:05.634004] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:55.228 [2024-07-25 13:18:05.634015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:55.228 [2024-07-25 13:18:05.634023] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:55.228 [2024-07-25 13:18:05.634034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:55.228 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:55.228 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:55.228 13:18:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:55.228 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:55.228 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:55.228 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:55.228 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:55.228 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:55.228 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.228 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.228 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.228 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.228 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.228 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.488 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.488 "name": "Existed_Raid", 00:17:55.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.488 "strip_size_kb": 64, 00:17:55.488 "state": "configuring", 00:17:55.488 "raid_level": "raid0", 00:17:55.488 "superblock": false, 00:17:55.488 "num_base_bdevs": 4, 00:17:55.488 "num_base_bdevs_discovered": 1, 00:17:55.488 "num_base_bdevs_operational": 4, 00:17:55.488 "base_bdevs_list": [ 00:17:55.488 { 
00:17:55.488 "name": "BaseBdev1", 00:17:55.488 "uuid": "44e56dd5-3452-421a-9b26-2a39c17f871f", 00:17:55.488 "is_configured": true, 00:17:55.488 "data_offset": 0, 00:17:55.488 "data_size": 65536 00:17:55.488 }, 00:17:55.488 { 00:17:55.488 "name": "BaseBdev2", 00:17:55.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.488 "is_configured": false, 00:17:55.488 "data_offset": 0, 00:17:55.488 "data_size": 0 00:17:55.488 }, 00:17:55.488 { 00:17:55.488 "name": "BaseBdev3", 00:17:55.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.488 "is_configured": false, 00:17:55.488 "data_offset": 0, 00:17:55.488 "data_size": 0 00:17:55.488 }, 00:17:55.488 { 00:17:55.488 "name": "BaseBdev4", 00:17:55.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.488 "is_configured": false, 00:17:55.488 "data_offset": 0, 00:17:55.488 "data_size": 0 00:17:55.488 } 00:17:55.488 ] 00:17:55.488 }' 00:17:55.488 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.488 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.056 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:56.316 [2024-07-25 13:18:06.698637] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.316 BaseBdev2 00:17:56.316 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:56.316 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:56.316 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:56.316 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:56.316 13:18:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:56.316 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:56.316 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.575 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:56.834 [ 00:17:56.834 { 00:17:56.834 "name": "BaseBdev2", 00:17:56.834 "aliases": [ 00:17:56.834 "efddf00a-a921-4dd3-ad85-175819586ba0" 00:17:56.834 ], 00:17:56.834 "product_name": "Malloc disk", 00:17:56.834 "block_size": 512, 00:17:56.834 "num_blocks": 65536, 00:17:56.834 "uuid": "efddf00a-a921-4dd3-ad85-175819586ba0", 00:17:56.834 "assigned_rate_limits": { 00:17:56.834 "rw_ios_per_sec": 0, 00:17:56.834 "rw_mbytes_per_sec": 0, 00:17:56.834 "r_mbytes_per_sec": 0, 00:17:56.834 "w_mbytes_per_sec": 0 00:17:56.834 }, 00:17:56.834 "claimed": true, 00:17:56.834 "claim_type": "exclusive_write", 00:17:56.834 "zoned": false, 00:17:56.834 "supported_io_types": { 00:17:56.834 "read": true, 00:17:56.834 "write": true, 00:17:56.834 "unmap": true, 00:17:56.834 "flush": true, 00:17:56.834 "reset": true, 00:17:56.834 "nvme_admin": false, 00:17:56.834 "nvme_io": false, 00:17:56.834 "nvme_io_md": false, 00:17:56.834 "write_zeroes": true, 00:17:56.834 "zcopy": true, 00:17:56.834 "get_zone_info": false, 00:17:56.834 "zone_management": false, 00:17:56.834 "zone_append": false, 00:17:56.834 "compare": false, 00:17:56.834 "compare_and_write": false, 00:17:56.834 "abort": true, 00:17:56.834 "seek_hole": false, 00:17:56.834 "seek_data": false, 00:17:56.834 "copy": true, 00:17:56.834 "nvme_iov_md": false 00:17:56.834 }, 00:17:56.834 "memory_domains": [ 00:17:56.834 { 00:17:56.834 "dma_device_id": "system", 
00:17:56.834 "dma_device_type": 1 00:17:56.834 }, 00:17:56.834 { 00:17:56.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.834 "dma_device_type": 2 00:17:56.834 } 00:17:56.834 ], 00:17:56.834 "driver_specific": {} 00:17:56.834 } 00:17:56.834 ] 00:17:56.834 13:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:56.834 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:56.834 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:56.834 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:56.834 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:56.834 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:56.834 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:56.834 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:56.835 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:56.835 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:56.835 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:56.835 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:56.835 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:56.835 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.835 13:18:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.094 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:57.094 "name": "Existed_Raid", 00:17:57.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.094 "strip_size_kb": 64, 00:17:57.094 "state": "configuring", 00:17:57.094 "raid_level": "raid0", 00:17:57.094 "superblock": false, 00:17:57.094 "num_base_bdevs": 4, 00:17:57.094 "num_base_bdevs_discovered": 2, 00:17:57.094 "num_base_bdevs_operational": 4, 00:17:57.094 "base_bdevs_list": [ 00:17:57.094 { 00:17:57.094 "name": "BaseBdev1", 00:17:57.094 "uuid": "44e56dd5-3452-421a-9b26-2a39c17f871f", 00:17:57.094 "is_configured": true, 00:17:57.094 "data_offset": 0, 00:17:57.094 "data_size": 65536 00:17:57.094 }, 00:17:57.094 { 00:17:57.094 "name": "BaseBdev2", 00:17:57.094 "uuid": "efddf00a-a921-4dd3-ad85-175819586ba0", 00:17:57.094 "is_configured": true, 00:17:57.094 "data_offset": 0, 00:17:57.094 "data_size": 65536 00:17:57.094 }, 00:17:57.094 { 00:17:57.094 "name": "BaseBdev3", 00:17:57.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.094 "is_configured": false, 00:17:57.094 "data_offset": 0, 00:17:57.094 "data_size": 0 00:17:57.094 }, 00:17:57.094 { 00:17:57.094 "name": "BaseBdev4", 00:17:57.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.094 "is_configured": false, 00:17:57.094 "data_offset": 0, 00:17:57.094 "data_size": 0 00:17:57.094 } 00:17:57.094 ] 00:17:57.094 }' 00:17:57.094 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:57.094 13:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.663 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:57.962 [2024-07-25 13:18:08.169711] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:57.962 BaseBdev3 00:17:57.963 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:57.963 13:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:57.963 13:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:57.963 13:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:57.963 13:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:57.963 13:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:57.963 13:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:57.963 13:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:58.222 [ 00:17:58.222 { 00:17:58.222 "name": "BaseBdev3", 00:17:58.222 "aliases": [ 00:17:58.222 "121fad94-919e-4ff1-90b8-45f4244c343f" 00:17:58.222 ], 00:17:58.222 "product_name": "Malloc disk", 00:17:58.222 "block_size": 512, 00:17:58.222 "num_blocks": 65536, 00:17:58.222 "uuid": "121fad94-919e-4ff1-90b8-45f4244c343f", 00:17:58.222 "assigned_rate_limits": { 00:17:58.222 "rw_ios_per_sec": 0, 00:17:58.222 "rw_mbytes_per_sec": 0, 00:17:58.222 "r_mbytes_per_sec": 0, 00:17:58.222 "w_mbytes_per_sec": 0 00:17:58.222 }, 00:17:58.222 "claimed": true, 00:17:58.222 "claim_type": "exclusive_write", 00:17:58.222 "zoned": false, 00:17:58.222 "supported_io_types": { 00:17:58.222 "read": true, 00:17:58.222 "write": true, 00:17:58.222 "unmap": true, 00:17:58.222 "flush": true, 00:17:58.222 
"reset": true, 00:17:58.222 "nvme_admin": false, 00:17:58.222 "nvme_io": false, 00:17:58.222 "nvme_io_md": false, 00:17:58.223 "write_zeroes": true, 00:17:58.223 "zcopy": true, 00:17:58.223 "get_zone_info": false, 00:17:58.223 "zone_management": false, 00:17:58.223 "zone_append": false, 00:17:58.223 "compare": false, 00:17:58.223 "compare_and_write": false, 00:17:58.223 "abort": true, 00:17:58.223 "seek_hole": false, 00:17:58.223 "seek_data": false, 00:17:58.223 "copy": true, 00:17:58.223 "nvme_iov_md": false 00:17:58.223 }, 00:17:58.223 "memory_domains": [ 00:17:58.223 { 00:17:58.223 "dma_device_id": "system", 00:17:58.223 "dma_device_type": 1 00:17:58.223 }, 00:17:58.223 { 00:17:58.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.223 "dma_device_type": 2 00:17:58.223 } 00:17:58.223 ], 00:17:58.223 "driver_specific": {} 00:17:58.223 } 00:17:58.223 ] 00:17:58.223 13:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:58.223 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:58.223 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:58.223 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:58.223 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:58.223 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:58.223 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:58.223 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:58.223 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:58.223 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:17:58.223 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.223 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.223 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.223 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.223 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.483 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.483 "name": "Existed_Raid", 00:17:58.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.483 "strip_size_kb": 64, 00:17:58.483 "state": "configuring", 00:17:58.483 "raid_level": "raid0", 00:17:58.483 "superblock": false, 00:17:58.483 "num_base_bdevs": 4, 00:17:58.483 "num_base_bdevs_discovered": 3, 00:17:58.483 "num_base_bdevs_operational": 4, 00:17:58.483 "base_bdevs_list": [ 00:17:58.483 { 00:17:58.483 "name": "BaseBdev1", 00:17:58.483 "uuid": "44e56dd5-3452-421a-9b26-2a39c17f871f", 00:17:58.483 "is_configured": true, 00:17:58.483 "data_offset": 0, 00:17:58.483 "data_size": 65536 00:17:58.483 }, 00:17:58.483 { 00:17:58.483 "name": "BaseBdev2", 00:17:58.483 "uuid": "efddf00a-a921-4dd3-ad85-175819586ba0", 00:17:58.483 "is_configured": true, 00:17:58.483 "data_offset": 0, 00:17:58.483 "data_size": 65536 00:17:58.483 }, 00:17:58.483 { 00:17:58.483 "name": "BaseBdev3", 00:17:58.483 "uuid": "121fad94-919e-4ff1-90b8-45f4244c343f", 00:17:58.483 "is_configured": true, 00:17:58.483 "data_offset": 0, 00:17:58.483 "data_size": 65536 00:17:58.483 }, 00:17:58.483 { 00:17:58.483 "name": "BaseBdev4", 00:17:58.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.483 "is_configured": 
false, 00:17:58.483 "data_offset": 0, 00:17:58.483 "data_size": 0 00:17:58.483 } 00:17:58.483 ] 00:17:58.483 }' 00:17:58.483 13:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.483 13:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.422 13:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:59.422 [2024-07-25 13:18:09.897372] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:59.422 [2024-07-25 13:18:09.897412] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c3f840 00:17:59.422 [2024-07-25 13:18:09.897420] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:59.422 [2024-07-25 13:18:09.897596] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c3f480 00:17:59.422 [2024-07-25 13:18:09.897707] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c3f840 00:17:59.422 [2024-07-25 13:18:09.897716] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1c3f840 00:17:59.422 [2024-07-25 13:18:09.897862] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.422 BaseBdev4 00:17:59.681 13:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:59.681 13:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:59.681 13:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:59.681 13:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:59.681 13:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:17:59.681 13:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:59.681 13:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:59.681 13:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:59.941 [ 00:17:59.941 { 00:17:59.941 "name": "BaseBdev4", 00:17:59.941 "aliases": [ 00:17:59.941 "e41b24e3-75b8-49bb-bc14-f46c6b78e583" 00:17:59.941 ], 00:17:59.941 "product_name": "Malloc disk", 00:17:59.941 "block_size": 512, 00:17:59.941 "num_blocks": 65536, 00:17:59.941 "uuid": "e41b24e3-75b8-49bb-bc14-f46c6b78e583", 00:17:59.941 "assigned_rate_limits": { 00:17:59.941 "rw_ios_per_sec": 0, 00:17:59.941 "rw_mbytes_per_sec": 0, 00:17:59.941 "r_mbytes_per_sec": 0, 00:17:59.941 "w_mbytes_per_sec": 0 00:17:59.941 }, 00:17:59.941 "claimed": true, 00:17:59.941 "claim_type": "exclusive_write", 00:17:59.941 "zoned": false, 00:17:59.941 "supported_io_types": { 00:17:59.941 "read": true, 00:17:59.941 "write": true, 00:17:59.941 "unmap": true, 00:17:59.941 "flush": true, 00:17:59.941 "reset": true, 00:17:59.941 "nvme_admin": false, 00:17:59.941 "nvme_io": false, 00:17:59.941 "nvme_io_md": false, 00:17:59.941 "write_zeroes": true, 00:17:59.941 "zcopy": true, 00:17:59.941 "get_zone_info": false, 00:17:59.941 "zone_management": false, 00:17:59.941 "zone_append": false, 00:17:59.941 "compare": false, 00:17:59.941 "compare_and_write": false, 00:17:59.941 "abort": true, 00:17:59.941 "seek_hole": false, 00:17:59.941 "seek_data": false, 00:17:59.941 "copy": true, 00:17:59.941 "nvme_iov_md": false 00:17:59.941 }, 00:17:59.941 "memory_domains": [ 00:17:59.941 { 00:17:59.941 "dma_device_id": "system", 00:17:59.941 "dma_device_type": 1 00:17:59.941 
}, 00:17:59.941 { 00:17:59.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.941 "dma_device_type": 2 00:17:59.941 } 00:17:59.941 ], 00:17:59.941 "driver_specific": {} 00:17:59.941 } 00:17:59.941 ] 00:17:59.941 13:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:59.941 13:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:59.941 13:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:59.941 13:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:59.941 13:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:59.941 13:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:59.941 13:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:59.941 13:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:59.941 13:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:59.941 13:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:59.941 13:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:59.941 13:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:59.941 13:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:59.941 13:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.941 13:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
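[Editorial aside, not part of the SPDK test trace: the step repeated throughout this log, `jq -r '.[] | select(.name == "Existed_Raid")'` applied to `rpc.py ... bdev_raid_get_bdevs all`, is what extracts the raid bdev record that `verify_raid_bdev_state` then inspects for state, level, strip size, and base bdev counts. A minimal Python sketch of the same selection — the sample JSON is abridged from the trace above, and this script is purely illustrative:]

```python
import json

# Sample shaped like the `bdev_raid_get_bdevs all` output in the trace,
# trimmed to the fields verify_raid_bdev_state compares.
rpc_output = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid0",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 4
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
info = next(b for b in rpc_output if b["name"] == "Existed_Raid")
print(info["state"], info["num_base_bdevs_discovered"])  # -> configuring 2
```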
00:18:00.201 13:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.201 "name": "Existed_Raid", 00:18:00.201 "uuid": "9fbd615a-e7c6-49e3-8768-3a18ccc79778", 00:18:00.201 "strip_size_kb": 64, 00:18:00.201 "state": "online", 00:18:00.201 "raid_level": "raid0", 00:18:00.201 "superblock": false, 00:18:00.201 "num_base_bdevs": 4, 00:18:00.201 "num_base_bdevs_discovered": 4, 00:18:00.201 "num_base_bdevs_operational": 4, 00:18:00.201 "base_bdevs_list": [ 00:18:00.201 { 00:18:00.201 "name": "BaseBdev1", 00:18:00.201 "uuid": "44e56dd5-3452-421a-9b26-2a39c17f871f", 00:18:00.201 "is_configured": true, 00:18:00.201 "data_offset": 0, 00:18:00.201 "data_size": 65536 00:18:00.201 }, 00:18:00.201 { 00:18:00.201 "name": "BaseBdev2", 00:18:00.201 "uuid": "efddf00a-a921-4dd3-ad85-175819586ba0", 00:18:00.201 "is_configured": true, 00:18:00.201 "data_offset": 0, 00:18:00.201 "data_size": 65536 00:18:00.201 }, 00:18:00.201 { 00:18:00.201 "name": "BaseBdev3", 00:18:00.201 "uuid": "121fad94-919e-4ff1-90b8-45f4244c343f", 00:18:00.201 "is_configured": true, 00:18:00.201 "data_offset": 0, 00:18:00.201 "data_size": 65536 00:18:00.201 }, 00:18:00.201 { 00:18:00.201 "name": "BaseBdev4", 00:18:00.201 "uuid": "e41b24e3-75b8-49bb-bc14-f46c6b78e583", 00:18:00.201 "is_configured": true, 00:18:00.201 "data_offset": 0, 00:18:00.201 "data_size": 65536 00:18:00.201 } 00:18:00.201 ] 00:18:00.201 }' 00:18:00.201 13:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.201 13:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.140 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:01.140 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:01.140 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 
00:18:01.140 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:01.140 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:01.140 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:01.140 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:01.140 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:01.140 [2024-07-25 13:18:11.622227] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.400 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:01.400 "name": "Existed_Raid", 00:18:01.400 "aliases": [ 00:18:01.400 "9fbd615a-e7c6-49e3-8768-3a18ccc79778" 00:18:01.400 ], 00:18:01.400 "product_name": "Raid Volume", 00:18:01.400 "block_size": 512, 00:18:01.400 "num_blocks": 262144, 00:18:01.400 "uuid": "9fbd615a-e7c6-49e3-8768-3a18ccc79778", 00:18:01.400 "assigned_rate_limits": { 00:18:01.400 "rw_ios_per_sec": 0, 00:18:01.400 "rw_mbytes_per_sec": 0, 00:18:01.400 "r_mbytes_per_sec": 0, 00:18:01.400 "w_mbytes_per_sec": 0 00:18:01.400 }, 00:18:01.400 "claimed": false, 00:18:01.400 "zoned": false, 00:18:01.400 "supported_io_types": { 00:18:01.400 "read": true, 00:18:01.400 "write": true, 00:18:01.400 "unmap": true, 00:18:01.400 "flush": true, 00:18:01.400 "reset": true, 00:18:01.400 "nvme_admin": false, 00:18:01.400 "nvme_io": false, 00:18:01.400 "nvme_io_md": false, 00:18:01.400 "write_zeroes": true, 00:18:01.400 "zcopy": false, 00:18:01.400 "get_zone_info": false, 00:18:01.400 "zone_management": false, 00:18:01.400 "zone_append": false, 00:18:01.400 "compare": false, 00:18:01.400 "compare_and_write": false, 00:18:01.400 "abort": false, 00:18:01.400 "seek_hole": false, 
00:18:01.400 "seek_data": false, 00:18:01.400 "copy": false, 00:18:01.400 "nvme_iov_md": false 00:18:01.400 }, 00:18:01.400 "memory_domains": [ 00:18:01.400 { 00:18:01.400 "dma_device_id": "system", 00:18:01.400 "dma_device_type": 1 00:18:01.400 }, 00:18:01.400 { 00:18:01.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.400 "dma_device_type": 2 00:18:01.400 }, 00:18:01.400 { 00:18:01.400 "dma_device_id": "system", 00:18:01.400 "dma_device_type": 1 00:18:01.400 }, 00:18:01.400 { 00:18:01.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.400 "dma_device_type": 2 00:18:01.400 }, 00:18:01.400 { 00:18:01.400 "dma_device_id": "system", 00:18:01.400 "dma_device_type": 1 00:18:01.400 }, 00:18:01.400 { 00:18:01.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.400 "dma_device_type": 2 00:18:01.400 }, 00:18:01.400 { 00:18:01.400 "dma_device_id": "system", 00:18:01.400 "dma_device_type": 1 00:18:01.400 }, 00:18:01.400 { 00:18:01.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.400 "dma_device_type": 2 00:18:01.400 } 00:18:01.400 ], 00:18:01.400 "driver_specific": { 00:18:01.400 "raid": { 00:18:01.400 "uuid": "9fbd615a-e7c6-49e3-8768-3a18ccc79778", 00:18:01.400 "strip_size_kb": 64, 00:18:01.400 "state": "online", 00:18:01.400 "raid_level": "raid0", 00:18:01.400 "superblock": false, 00:18:01.400 "num_base_bdevs": 4, 00:18:01.400 "num_base_bdevs_discovered": 4, 00:18:01.400 "num_base_bdevs_operational": 4, 00:18:01.400 "base_bdevs_list": [ 00:18:01.400 { 00:18:01.400 "name": "BaseBdev1", 00:18:01.400 "uuid": "44e56dd5-3452-421a-9b26-2a39c17f871f", 00:18:01.400 "is_configured": true, 00:18:01.400 "data_offset": 0, 00:18:01.400 "data_size": 65536 00:18:01.400 }, 00:18:01.400 { 00:18:01.400 "name": "BaseBdev2", 00:18:01.400 "uuid": "efddf00a-a921-4dd3-ad85-175819586ba0", 00:18:01.400 "is_configured": true, 00:18:01.400 "data_offset": 0, 00:18:01.400 "data_size": 65536 00:18:01.400 }, 00:18:01.400 { 00:18:01.400 "name": "BaseBdev3", 00:18:01.400 "uuid": 
"121fad94-919e-4ff1-90b8-45f4244c343f", 00:18:01.400 "is_configured": true, 00:18:01.400 "data_offset": 0, 00:18:01.400 "data_size": 65536 00:18:01.400 }, 00:18:01.400 { 00:18:01.400 "name": "BaseBdev4", 00:18:01.400 "uuid": "e41b24e3-75b8-49bb-bc14-f46c6b78e583", 00:18:01.400 "is_configured": true, 00:18:01.400 "data_offset": 0, 00:18:01.400 "data_size": 65536 00:18:01.400 } 00:18:01.400 ] 00:18:01.400 } 00:18:01.400 } 00:18:01.400 }' 00:18:01.400 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:01.400 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:01.400 BaseBdev2 00:18:01.400 BaseBdev3 00:18:01.400 BaseBdev4' 00:18:01.400 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:01.400 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:01.400 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:01.660 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:01.660 "name": "BaseBdev1", 00:18:01.660 "aliases": [ 00:18:01.660 "44e56dd5-3452-421a-9b26-2a39c17f871f" 00:18:01.660 ], 00:18:01.660 "product_name": "Malloc disk", 00:18:01.660 "block_size": 512, 00:18:01.660 "num_blocks": 65536, 00:18:01.660 "uuid": "44e56dd5-3452-421a-9b26-2a39c17f871f", 00:18:01.660 "assigned_rate_limits": { 00:18:01.660 "rw_ios_per_sec": 0, 00:18:01.660 "rw_mbytes_per_sec": 0, 00:18:01.660 "r_mbytes_per_sec": 0, 00:18:01.660 "w_mbytes_per_sec": 0 00:18:01.660 }, 00:18:01.660 "claimed": true, 00:18:01.660 "claim_type": "exclusive_write", 00:18:01.660 "zoned": false, 00:18:01.660 "supported_io_types": { 00:18:01.660 "read": true, 00:18:01.660 
"write": true, 00:18:01.660 "unmap": true, 00:18:01.660 "flush": true, 00:18:01.660 "reset": true, 00:18:01.660 "nvme_admin": false, 00:18:01.660 "nvme_io": false, 00:18:01.660 "nvme_io_md": false, 00:18:01.660 "write_zeroes": true, 00:18:01.660 "zcopy": true, 00:18:01.660 "get_zone_info": false, 00:18:01.660 "zone_management": false, 00:18:01.660 "zone_append": false, 00:18:01.660 "compare": false, 00:18:01.660 "compare_and_write": false, 00:18:01.660 "abort": true, 00:18:01.660 "seek_hole": false, 00:18:01.660 "seek_data": false, 00:18:01.660 "copy": true, 00:18:01.660 "nvme_iov_md": false 00:18:01.660 }, 00:18:01.660 "memory_domains": [ 00:18:01.660 { 00:18:01.660 "dma_device_id": "system", 00:18:01.660 "dma_device_type": 1 00:18:01.660 }, 00:18:01.660 { 00:18:01.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.660 "dma_device_type": 2 00:18:01.660 } 00:18:01.660 ], 00:18:01.660 "driver_specific": {} 00:18:01.660 }' 00:18:01.660 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.660 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.660 13:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:01.660 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.660 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.660 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:01.660 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.660 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.919 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:01.919 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.919 13:18:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.919 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:01.919 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:01.919 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:01.919 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:02.179 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:02.179 "name": "BaseBdev2", 00:18:02.179 "aliases": [ 00:18:02.179 "efddf00a-a921-4dd3-ad85-175819586ba0" 00:18:02.179 ], 00:18:02.179 "product_name": "Malloc disk", 00:18:02.179 "block_size": 512, 00:18:02.179 "num_blocks": 65536, 00:18:02.179 "uuid": "efddf00a-a921-4dd3-ad85-175819586ba0", 00:18:02.179 "assigned_rate_limits": { 00:18:02.179 "rw_ios_per_sec": 0, 00:18:02.179 "rw_mbytes_per_sec": 0, 00:18:02.179 "r_mbytes_per_sec": 0, 00:18:02.179 "w_mbytes_per_sec": 0 00:18:02.179 }, 00:18:02.179 "claimed": true, 00:18:02.179 "claim_type": "exclusive_write", 00:18:02.179 "zoned": false, 00:18:02.179 "supported_io_types": { 00:18:02.179 "read": true, 00:18:02.179 "write": true, 00:18:02.179 "unmap": true, 00:18:02.179 "flush": true, 00:18:02.179 "reset": true, 00:18:02.179 "nvme_admin": false, 00:18:02.179 "nvme_io": false, 00:18:02.179 "nvme_io_md": false, 00:18:02.179 "write_zeroes": true, 00:18:02.179 "zcopy": true, 00:18:02.179 "get_zone_info": false, 00:18:02.179 "zone_management": false, 00:18:02.179 "zone_append": false, 00:18:02.179 "compare": false, 00:18:02.179 "compare_and_write": false, 00:18:02.179 "abort": true, 00:18:02.179 "seek_hole": false, 00:18:02.179 "seek_data": false, 00:18:02.179 "copy": true, 00:18:02.179 "nvme_iov_md": false 00:18:02.179 }, 
00:18:02.179 "memory_domains": [ 00:18:02.179 { 00:18:02.179 "dma_device_id": "system", 00:18:02.179 "dma_device_type": 1 00:18:02.179 }, 00:18:02.179 { 00:18:02.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.179 "dma_device_type": 2 00:18:02.179 } 00:18:02.179 ], 00:18:02.179 "driver_specific": {} 00:18:02.179 }' 00:18:02.179 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.179 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.179 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:02.179 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.179 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.179 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:02.179 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.439 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.439 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:02.439 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.439 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.439 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:02.439 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:02.439 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:02.439 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:02.699 13:18:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:02.699 "name": "BaseBdev3", 00:18:02.699 "aliases": [ 00:18:02.699 "121fad94-919e-4ff1-90b8-45f4244c343f" 00:18:02.699 ], 00:18:02.699 "product_name": "Malloc disk", 00:18:02.699 "block_size": 512, 00:18:02.699 "num_blocks": 65536, 00:18:02.699 "uuid": "121fad94-919e-4ff1-90b8-45f4244c343f", 00:18:02.699 "assigned_rate_limits": { 00:18:02.699 "rw_ios_per_sec": 0, 00:18:02.699 "rw_mbytes_per_sec": 0, 00:18:02.699 "r_mbytes_per_sec": 0, 00:18:02.699 "w_mbytes_per_sec": 0 00:18:02.699 }, 00:18:02.699 "claimed": true, 00:18:02.699 "claim_type": "exclusive_write", 00:18:02.699 "zoned": false, 00:18:02.699 "supported_io_types": { 00:18:02.699 "read": true, 00:18:02.699 "write": true, 00:18:02.699 "unmap": true, 00:18:02.699 "flush": true, 00:18:02.699 "reset": true, 00:18:02.699 "nvme_admin": false, 00:18:02.699 "nvme_io": false, 00:18:02.699 "nvme_io_md": false, 00:18:02.699 "write_zeroes": true, 00:18:02.699 "zcopy": true, 00:18:02.699 "get_zone_info": false, 00:18:02.699 "zone_management": false, 00:18:02.699 "zone_append": false, 00:18:02.699 "compare": false, 00:18:02.699 "compare_and_write": false, 00:18:02.699 "abort": true, 00:18:02.699 "seek_hole": false, 00:18:02.699 "seek_data": false, 00:18:02.699 "copy": true, 00:18:02.699 "nvme_iov_md": false 00:18:02.699 }, 00:18:02.699 "memory_domains": [ 00:18:02.699 { 00:18:02.699 "dma_device_id": "system", 00:18:02.699 "dma_device_type": 1 00:18:02.699 }, 00:18:02.699 { 00:18:02.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.699 "dma_device_type": 2 00:18:02.699 } 00:18:02.699 ], 00:18:02.699 "driver_specific": {} 00:18:02.699 }' 00:18:02.699 13:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.699 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.699 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:18:02.699 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.699 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.699 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:02.699 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.699 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.959 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:02.959 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.959 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.959 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:02.959 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:02.959 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:02.959 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:02.959 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:02.959 "name": "BaseBdev4", 00:18:02.959 "aliases": [ 00:18:02.959 "e41b24e3-75b8-49bb-bc14-f46c6b78e583" 00:18:02.959 ], 00:18:02.959 "product_name": "Malloc disk", 00:18:02.959 "block_size": 512, 00:18:02.959 "num_blocks": 65536, 00:18:02.959 "uuid": "e41b24e3-75b8-49bb-bc14-f46c6b78e583", 00:18:02.959 "assigned_rate_limits": { 00:18:02.959 "rw_ios_per_sec": 0, 00:18:02.959 "rw_mbytes_per_sec": 0, 00:18:02.959 "r_mbytes_per_sec": 0, 00:18:02.959 "w_mbytes_per_sec": 0 00:18:02.959 }, 00:18:02.959 "claimed": true, 00:18:02.959 
"claim_type": "exclusive_write", 00:18:02.959 "zoned": false, 00:18:02.959 "supported_io_types": { 00:18:02.959 "read": true, 00:18:02.959 "write": true, 00:18:02.959 "unmap": true, 00:18:02.959 "flush": true, 00:18:02.959 "reset": true, 00:18:02.959 "nvme_admin": false, 00:18:02.959 "nvme_io": false, 00:18:02.959 "nvme_io_md": false, 00:18:02.959 "write_zeroes": true, 00:18:02.959 "zcopy": true, 00:18:02.959 "get_zone_info": false, 00:18:02.959 "zone_management": false, 00:18:02.959 "zone_append": false, 00:18:02.959 "compare": false, 00:18:02.959 "compare_and_write": false, 00:18:02.959 "abort": true, 00:18:02.959 "seek_hole": false, 00:18:02.959 "seek_data": false, 00:18:02.959 "copy": true, 00:18:02.959 "nvme_iov_md": false 00:18:02.959 }, 00:18:02.959 "memory_domains": [ 00:18:02.959 { 00:18:02.959 "dma_device_id": "system", 00:18:02.959 "dma_device_type": 1 00:18:02.959 }, 00:18:02.959 { 00:18:02.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.959 "dma_device_type": 2 00:18:02.959 } 00:18:02.959 ], 00:18:02.959 "driver_specific": {} 00:18:02.959 }' 00:18:02.959 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:03.218 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:03.218 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:03.218 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:03.218 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:03.218 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:03.218 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:03.218 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:03.477 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:18:03.477 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:03.477 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:03.477 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:03.477 13:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:03.736 [2024-07-25 13:18:14.028320] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:03.736 [2024-07-25 13:18:14.028343] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.736 [2024-07-25 13:18:14.028383] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.736 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:03.736 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:18:03.736 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:03.736 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:03.736 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:03.736 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:03.736 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:03.736 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:03.736 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:03.736 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:18:03.736 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:03.737 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:03.737 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:03.737 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.737 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.737 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.737 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.997 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.997 "name": "Existed_Raid", 00:18:03.997 "uuid": "9fbd615a-e7c6-49e3-8768-3a18ccc79778", 00:18:03.997 "strip_size_kb": 64, 00:18:03.997 "state": "offline", 00:18:03.997 "raid_level": "raid0", 00:18:03.997 "superblock": false, 00:18:03.997 "num_base_bdevs": 4, 00:18:03.997 "num_base_bdevs_discovered": 3, 00:18:03.997 "num_base_bdevs_operational": 3, 00:18:03.997 "base_bdevs_list": [ 00:18:03.997 { 00:18:03.997 "name": null, 00:18:03.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.997 "is_configured": false, 00:18:03.997 "data_offset": 0, 00:18:03.997 "data_size": 65536 00:18:03.997 }, 00:18:03.997 { 00:18:03.997 "name": "BaseBdev2", 00:18:03.997 "uuid": "efddf00a-a921-4dd3-ad85-175819586ba0", 00:18:03.997 "is_configured": true, 00:18:03.997 "data_offset": 0, 00:18:03.997 "data_size": 65536 00:18:03.997 }, 00:18:03.997 { 00:18:03.997 "name": "BaseBdev3", 00:18:03.997 "uuid": "121fad94-919e-4ff1-90b8-45f4244c343f", 00:18:03.997 "is_configured": true, 00:18:03.997 
"data_offset": 0, 00:18:03.997 "data_size": 65536 00:18:03.997 }, 00:18:03.997 { 00:18:03.997 "name": "BaseBdev4", 00:18:03.997 "uuid": "e41b24e3-75b8-49bb-bc14-f46c6b78e583", 00:18:03.997 "is_configured": true, 00:18:03.997 "data_offset": 0, 00:18:03.997 "data_size": 65536 00:18:03.997 } 00:18:03.997 ] 00:18:03.997 }' 00:18:03.997 13:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.997 13:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.935 13:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:04.935 13:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:04.935 13:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.935 13:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:04.935 13:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:04.935 13:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:04.935 13:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:05.196 [2024-07-25 13:18:15.569310] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:05.196 13:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:05.196 13:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:05.196 13:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:18:05.196 13:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:05.456 13:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:05.456 13:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:05.456 13:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:05.716 [2024-07-25 13:18:16.040471] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:05.717 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:05.717 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:05.717 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:05.717 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.977 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:05.977 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:05.977 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:06.236 [2024-07-25 13:18:16.511770] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:06.236 [2024-07-25 13:18:16.511807] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c3f840 name Existed_Raid, state offline 00:18:06.237 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:06.237 13:18:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:06.237 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.237 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:06.496 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:06.496 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:06.496 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:18:06.496 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:06.496 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:06.496 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:06.756 BaseBdev2 00:18:06.756 13:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:06.756 13:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:06.756 13:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:06.756 13:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:06.756 13:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:06.756 13:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:06.756 13:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:06.756 13:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:07.016 [ 00:18:07.016 { 00:18:07.016 "name": "BaseBdev2", 00:18:07.016 "aliases": [ 00:18:07.016 "a6b8a6c0-3fd2-4890-9285-b34c3cc21c9e" 00:18:07.016 ], 00:18:07.016 "product_name": "Malloc disk", 00:18:07.016 "block_size": 512, 00:18:07.016 "num_blocks": 65536, 00:18:07.016 "uuid": "a6b8a6c0-3fd2-4890-9285-b34c3cc21c9e", 00:18:07.016 "assigned_rate_limits": { 00:18:07.016 "rw_ios_per_sec": 0, 00:18:07.016 "rw_mbytes_per_sec": 0, 00:18:07.016 "r_mbytes_per_sec": 0, 00:18:07.016 "w_mbytes_per_sec": 0 00:18:07.016 }, 00:18:07.016 "claimed": false, 00:18:07.016 "zoned": false, 00:18:07.016 "supported_io_types": { 00:18:07.016 "read": true, 00:18:07.016 "write": true, 00:18:07.016 "unmap": true, 00:18:07.016 "flush": true, 00:18:07.016 "reset": true, 00:18:07.016 "nvme_admin": false, 00:18:07.016 "nvme_io": false, 00:18:07.016 "nvme_io_md": false, 00:18:07.016 "write_zeroes": true, 00:18:07.016 "zcopy": true, 00:18:07.016 "get_zone_info": false, 00:18:07.016 "zone_management": false, 00:18:07.016 "zone_append": false, 00:18:07.016 "compare": false, 00:18:07.016 "compare_and_write": false, 00:18:07.016 "abort": true, 00:18:07.016 "seek_hole": false, 00:18:07.016 "seek_data": false, 00:18:07.016 "copy": true, 00:18:07.016 "nvme_iov_md": false 00:18:07.016 }, 00:18:07.016 "memory_domains": [ 00:18:07.016 { 00:18:07.016 "dma_device_id": "system", 00:18:07.016 "dma_device_type": 1 00:18:07.016 }, 00:18:07.016 { 00:18:07.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.016 "dma_device_type": 2 00:18:07.016 } 00:18:07.016 ], 00:18:07.016 "driver_specific": {} 00:18:07.016 } 00:18:07.016 ] 00:18:07.016 13:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:07.016 
13:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:07.016 13:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:07.016 13:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:07.276 BaseBdev3 00:18:07.276 13:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:07.276 13:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:07.276 13:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:07.276 13:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:07.276 13:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:07.276 13:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:07.276 13:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:07.535 13:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:07.796 [ 00:18:07.796 { 00:18:07.796 "name": "BaseBdev3", 00:18:07.796 "aliases": [ 00:18:07.796 "7c4f534c-e392-42ca-939d-642dc67d89a6" 00:18:07.796 ], 00:18:07.796 "product_name": "Malloc disk", 00:18:07.796 "block_size": 512, 00:18:07.796 "num_blocks": 65536, 00:18:07.796 "uuid": "7c4f534c-e392-42ca-939d-642dc67d89a6", 00:18:07.796 "assigned_rate_limits": { 00:18:07.796 "rw_ios_per_sec": 0, 00:18:07.796 "rw_mbytes_per_sec": 0, 00:18:07.796 
"r_mbytes_per_sec": 0, 00:18:07.796 "w_mbytes_per_sec": 0 00:18:07.796 }, 00:18:07.796 "claimed": false, 00:18:07.796 "zoned": false, 00:18:07.796 "supported_io_types": { 00:18:07.796 "read": true, 00:18:07.796 "write": true, 00:18:07.796 "unmap": true, 00:18:07.796 "flush": true, 00:18:07.796 "reset": true, 00:18:07.796 "nvme_admin": false, 00:18:07.796 "nvme_io": false, 00:18:07.796 "nvme_io_md": false, 00:18:07.796 "write_zeroes": true, 00:18:07.796 "zcopy": true, 00:18:07.796 "get_zone_info": false, 00:18:07.796 "zone_management": false, 00:18:07.796 "zone_append": false, 00:18:07.796 "compare": false, 00:18:07.796 "compare_and_write": false, 00:18:07.796 "abort": true, 00:18:07.796 "seek_hole": false, 00:18:07.796 "seek_data": false, 00:18:07.796 "copy": true, 00:18:07.796 "nvme_iov_md": false 00:18:07.796 }, 00:18:07.796 "memory_domains": [ 00:18:07.796 { 00:18:07.796 "dma_device_id": "system", 00:18:07.796 "dma_device_type": 1 00:18:07.796 }, 00:18:07.796 { 00:18:07.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.796 "dma_device_type": 2 00:18:07.796 } 00:18:07.796 ], 00:18:07.796 "driver_specific": {} 00:18:07.796 } 00:18:07.796 ] 00:18:07.796 13:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:07.796 13:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:07.796 13:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:07.796 13:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:08.056 BaseBdev4 00:18:08.056 13:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:18:08.056 13:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:08.056 13:18:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:08.056 13:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:08.056 13:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:08.056 13:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:08.056 13:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:08.316 13:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:08.316 [ 00:18:08.316 { 00:18:08.316 "name": "BaseBdev4", 00:18:08.316 "aliases": [ 00:18:08.316 "f45b4125-48c3-4485-a020-1400b7596bd0" 00:18:08.316 ], 00:18:08.316 "product_name": "Malloc disk", 00:18:08.316 "block_size": 512, 00:18:08.316 "num_blocks": 65536, 00:18:08.316 "uuid": "f45b4125-48c3-4485-a020-1400b7596bd0", 00:18:08.316 "assigned_rate_limits": { 00:18:08.316 "rw_ios_per_sec": 0, 00:18:08.316 "rw_mbytes_per_sec": 0, 00:18:08.316 "r_mbytes_per_sec": 0, 00:18:08.316 "w_mbytes_per_sec": 0 00:18:08.316 }, 00:18:08.316 "claimed": false, 00:18:08.316 "zoned": false, 00:18:08.316 "supported_io_types": { 00:18:08.316 "read": true, 00:18:08.316 "write": true, 00:18:08.316 "unmap": true, 00:18:08.316 "flush": true, 00:18:08.316 "reset": true, 00:18:08.316 "nvme_admin": false, 00:18:08.316 "nvme_io": false, 00:18:08.316 "nvme_io_md": false, 00:18:08.316 "write_zeroes": true, 00:18:08.316 "zcopy": true, 00:18:08.316 "get_zone_info": false, 00:18:08.316 "zone_management": false, 00:18:08.316 "zone_append": false, 00:18:08.316 "compare": false, 00:18:08.316 "compare_and_write": false, 00:18:08.316 "abort": true, 00:18:08.316 
"seek_hole": false, 00:18:08.316 "seek_data": false, 00:18:08.316 "copy": true, 00:18:08.316 "nvme_iov_md": false 00:18:08.316 }, 00:18:08.316 "memory_domains": [ 00:18:08.316 { 00:18:08.316 "dma_device_id": "system", 00:18:08.316 "dma_device_type": 1 00:18:08.316 }, 00:18:08.316 { 00:18:08.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.316 "dma_device_type": 2 00:18:08.316 } 00:18:08.316 ], 00:18:08.316 "driver_specific": {} 00:18:08.316 } 00:18:08.316 ] 00:18:08.316 13:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:08.316 13:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:08.316 13:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:08.316 13:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:08.576 [2024-07-25 13:18:18.989095] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:08.576 [2024-07-25 13:18:18.989131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:08.576 [2024-07-25 13:18:18.989155] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:08.576 [2024-07-25 13:18:18.990381] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:08.576 [2024-07-25 13:18:18.990419] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:08.576 13:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:08.576 13:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:08.576 13:18:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:08.576 13:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:08.576 13:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:08.576 13:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:08.576 13:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:08.576 13:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:08.576 13:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:08.576 13:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:08.576 13:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.576 13:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.835 13:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.836 "name": "Existed_Raid", 00:18:08.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.836 "strip_size_kb": 64, 00:18:08.836 "state": "configuring", 00:18:08.836 "raid_level": "raid0", 00:18:08.836 "superblock": false, 00:18:08.836 "num_base_bdevs": 4, 00:18:08.836 "num_base_bdevs_discovered": 3, 00:18:08.836 "num_base_bdevs_operational": 4, 00:18:08.836 "base_bdevs_list": [ 00:18:08.836 { 00:18:08.836 "name": "BaseBdev1", 00:18:08.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.836 "is_configured": false, 00:18:08.836 "data_offset": 0, 00:18:08.836 "data_size": 0 00:18:08.836 }, 00:18:08.836 { 00:18:08.836 "name": "BaseBdev2", 00:18:08.836 "uuid": 
"a6b8a6c0-3fd2-4890-9285-b34c3cc21c9e", 00:18:08.836 "is_configured": true, 00:18:08.836 "data_offset": 0, 00:18:08.836 "data_size": 65536 00:18:08.836 }, 00:18:08.836 { 00:18:08.836 "name": "BaseBdev3", 00:18:08.836 "uuid": "7c4f534c-e392-42ca-939d-642dc67d89a6", 00:18:08.836 "is_configured": true, 00:18:08.836 "data_offset": 0, 00:18:08.836 "data_size": 65536 00:18:08.836 }, 00:18:08.836 { 00:18:08.836 "name": "BaseBdev4", 00:18:08.836 "uuid": "f45b4125-48c3-4485-a020-1400b7596bd0", 00:18:08.836 "is_configured": true, 00:18:08.836 "data_offset": 0, 00:18:08.836 "data_size": 65536 00:18:08.836 } 00:18:08.836 ] 00:18:08.836 }' 00:18:08.836 13:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.836 13:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.405 13:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:09.665 [2024-07-25 13:18:20.027809] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:09.665 13:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:09.665 13:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:09.665 13:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:09.665 13:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:09.665 13:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:09.665 13:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:09.665 13:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:18:09.665 13:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.665 13:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.665 13:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.665 13:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.665 13:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.925 13:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.925 "name": "Existed_Raid", 00:18:09.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.925 "strip_size_kb": 64, 00:18:09.925 "state": "configuring", 00:18:09.925 "raid_level": "raid0", 00:18:09.925 "superblock": false, 00:18:09.925 "num_base_bdevs": 4, 00:18:09.925 "num_base_bdevs_discovered": 2, 00:18:09.925 "num_base_bdevs_operational": 4, 00:18:09.925 "base_bdevs_list": [ 00:18:09.925 { 00:18:09.925 "name": "BaseBdev1", 00:18:09.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.925 "is_configured": false, 00:18:09.925 "data_offset": 0, 00:18:09.925 "data_size": 0 00:18:09.925 }, 00:18:09.925 { 00:18:09.925 "name": null, 00:18:09.925 "uuid": "a6b8a6c0-3fd2-4890-9285-b34c3cc21c9e", 00:18:09.925 "is_configured": false, 00:18:09.925 "data_offset": 0, 00:18:09.925 "data_size": 65536 00:18:09.925 }, 00:18:09.925 { 00:18:09.925 "name": "BaseBdev3", 00:18:09.925 "uuid": "7c4f534c-e392-42ca-939d-642dc67d89a6", 00:18:09.925 "is_configured": true, 00:18:09.925 "data_offset": 0, 00:18:09.925 "data_size": 65536 00:18:09.925 }, 00:18:09.925 { 00:18:09.925 "name": "BaseBdev4", 00:18:09.925 "uuid": "f45b4125-48c3-4485-a020-1400b7596bd0", 00:18:09.925 "is_configured": true, 00:18:09.925 
"data_offset": 0, 00:18:09.925 "data_size": 65536 00:18:09.925 } 00:18:09.925 ] 00:18:09.925 }' 00:18:09.925 13:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.925 13:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.495 13:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.495 13:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:10.754 13:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:10.754 13:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:11.016 [2024-07-25 13:18:21.278194] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:11.016 BaseBdev1 00:18:11.016 13:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:11.016 13:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:11.016 13:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:11.016 13:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:11.016 13:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:11.016 13:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:11.016 13:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:11.356 
13:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:11.356 [ 00:18:11.356 { 00:18:11.356 "name": "BaseBdev1", 00:18:11.356 "aliases": [ 00:18:11.356 "e9e3892e-1fad-468b-9aa8-e40059cebf6e" 00:18:11.356 ], 00:18:11.356 "product_name": "Malloc disk", 00:18:11.356 "block_size": 512, 00:18:11.356 "num_blocks": 65536, 00:18:11.356 "uuid": "e9e3892e-1fad-468b-9aa8-e40059cebf6e", 00:18:11.356 "assigned_rate_limits": { 00:18:11.356 "rw_ios_per_sec": 0, 00:18:11.356 "rw_mbytes_per_sec": 0, 00:18:11.356 "r_mbytes_per_sec": 0, 00:18:11.356 "w_mbytes_per_sec": 0 00:18:11.356 }, 00:18:11.356 "claimed": true, 00:18:11.356 "claim_type": "exclusive_write", 00:18:11.356 "zoned": false, 00:18:11.356 "supported_io_types": { 00:18:11.356 "read": true, 00:18:11.356 "write": true, 00:18:11.356 "unmap": true, 00:18:11.356 "flush": true, 00:18:11.356 "reset": true, 00:18:11.356 "nvme_admin": false, 00:18:11.356 "nvme_io": false, 00:18:11.356 "nvme_io_md": false, 00:18:11.356 "write_zeroes": true, 00:18:11.356 "zcopy": true, 00:18:11.356 "get_zone_info": false, 00:18:11.356 "zone_management": false, 00:18:11.356 "zone_append": false, 00:18:11.356 "compare": false, 00:18:11.356 "compare_and_write": false, 00:18:11.356 "abort": true, 00:18:11.357 "seek_hole": false, 00:18:11.357 "seek_data": false, 00:18:11.357 "copy": true, 00:18:11.357 "nvme_iov_md": false 00:18:11.357 }, 00:18:11.357 "memory_domains": [ 00:18:11.357 { 00:18:11.357 "dma_device_id": "system", 00:18:11.357 "dma_device_type": 1 00:18:11.357 }, 00:18:11.357 { 00:18:11.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.357 "dma_device_type": 2 00:18:11.357 } 00:18:11.357 ], 00:18:11.357 "driver_specific": {} 00:18:11.357 } 00:18:11.357 ] 00:18:11.357 13:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:11.357 13:18:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:11.357 13:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:11.357 13:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:11.357 13:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:11.357 13:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:11.357 13:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:11.357 13:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:11.357 13:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:11.357 13:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:11.357 13:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:11.357 13:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.357 13:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.616 13:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:11.616 "name": "Existed_Raid", 00:18:11.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.616 "strip_size_kb": 64, 00:18:11.616 "state": "configuring", 00:18:11.616 "raid_level": "raid0", 00:18:11.616 "superblock": false, 00:18:11.616 "num_base_bdevs": 4, 00:18:11.616 "num_base_bdevs_discovered": 3, 00:18:11.616 "num_base_bdevs_operational": 4, 00:18:11.616 "base_bdevs_list": [ 00:18:11.616 { 
00:18:11.616 "name": "BaseBdev1", 00:18:11.616 "uuid": "e9e3892e-1fad-468b-9aa8-e40059cebf6e", 00:18:11.616 "is_configured": true, 00:18:11.616 "data_offset": 0, 00:18:11.616 "data_size": 65536 00:18:11.616 }, 00:18:11.616 { 00:18:11.616 "name": null, 00:18:11.616 "uuid": "a6b8a6c0-3fd2-4890-9285-b34c3cc21c9e", 00:18:11.616 "is_configured": false, 00:18:11.616 "data_offset": 0, 00:18:11.616 "data_size": 65536 00:18:11.616 }, 00:18:11.616 { 00:18:11.616 "name": "BaseBdev3", 00:18:11.616 "uuid": "7c4f534c-e392-42ca-939d-642dc67d89a6", 00:18:11.616 "is_configured": true, 00:18:11.616 "data_offset": 0, 00:18:11.616 "data_size": 65536 00:18:11.616 }, 00:18:11.616 { 00:18:11.616 "name": "BaseBdev4", 00:18:11.616 "uuid": "f45b4125-48c3-4485-a020-1400b7596bd0", 00:18:11.616 "is_configured": true, 00:18:11.616 "data_offset": 0, 00:18:11.616 "data_size": 65536 00:18:11.616 } 00:18:11.617 ] 00:18:11.617 }' 00:18:11.617 13:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:11.617 13:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.185 13:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.185 13:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:12.445 13:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:12.445 13:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:12.445 [2024-07-25 13:18:22.898505] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:12.445 13:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 4 00:18:12.445 13:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:12.445 13:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:12.445 13:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:12.445 13:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:12.445 13:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:12.445 13:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.445 13:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.445 13:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.445 13:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.445 13:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.445 13:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.016 13:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:13.016 "name": "Existed_Raid", 00:18:13.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.016 "strip_size_kb": 64, 00:18:13.016 "state": "configuring", 00:18:13.016 "raid_level": "raid0", 00:18:13.016 "superblock": false, 00:18:13.016 "num_base_bdevs": 4, 00:18:13.016 "num_base_bdevs_discovered": 2, 00:18:13.016 "num_base_bdevs_operational": 4, 00:18:13.016 "base_bdevs_list": [ 00:18:13.016 { 00:18:13.016 "name": "BaseBdev1", 00:18:13.016 "uuid": "e9e3892e-1fad-468b-9aa8-e40059cebf6e", 00:18:13.016 
"is_configured": true, 00:18:13.016 "data_offset": 0, 00:18:13.016 "data_size": 65536 00:18:13.016 }, 00:18:13.016 { 00:18:13.016 "name": null, 00:18:13.016 "uuid": "a6b8a6c0-3fd2-4890-9285-b34c3cc21c9e", 00:18:13.016 "is_configured": false, 00:18:13.016 "data_offset": 0, 00:18:13.016 "data_size": 65536 00:18:13.016 }, 00:18:13.016 { 00:18:13.016 "name": null, 00:18:13.016 "uuid": "7c4f534c-e392-42ca-939d-642dc67d89a6", 00:18:13.016 "is_configured": false, 00:18:13.016 "data_offset": 0, 00:18:13.016 "data_size": 65536 00:18:13.016 }, 00:18:13.016 { 00:18:13.016 "name": "BaseBdev4", 00:18:13.016 "uuid": "f45b4125-48c3-4485-a020-1400b7596bd0", 00:18:13.016 "is_configured": true, 00:18:13.016 "data_offset": 0, 00:18:13.016 "data_size": 65536 00:18:13.016 } 00:18:13.016 ] 00:18:13.016 }' 00:18:13.016 13:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:13.016 13:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.584 13:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.584 13:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:13.843 13:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:13.843 13:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:14.102 [2024-07-25 13:18:24.426539] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:14.102 13:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:14.103 13:18:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:14.103 13:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:14.103 13:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:14.103 13:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:14.103 13:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:14.103 13:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:14.103 13:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:14.103 13:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:14.103 13:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:14.103 13:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.103 13:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.362 13:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:14.362 "name": "Existed_Raid", 00:18:14.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.362 "strip_size_kb": 64, 00:18:14.362 "state": "configuring", 00:18:14.362 "raid_level": "raid0", 00:18:14.362 "superblock": false, 00:18:14.362 "num_base_bdevs": 4, 00:18:14.362 "num_base_bdevs_discovered": 3, 00:18:14.362 "num_base_bdevs_operational": 4, 00:18:14.362 "base_bdevs_list": [ 00:18:14.362 { 00:18:14.362 "name": "BaseBdev1", 00:18:14.362 "uuid": "e9e3892e-1fad-468b-9aa8-e40059cebf6e", 00:18:14.362 "is_configured": true, 00:18:14.362 "data_offset": 0, 00:18:14.362 "data_size": 65536 
00:18:14.362 }, 00:18:14.362 { 00:18:14.362 "name": null, 00:18:14.362 "uuid": "a6b8a6c0-3fd2-4890-9285-b34c3cc21c9e", 00:18:14.362 "is_configured": false, 00:18:14.362 "data_offset": 0, 00:18:14.362 "data_size": 65536 00:18:14.362 }, 00:18:14.362 { 00:18:14.362 "name": "BaseBdev3", 00:18:14.362 "uuid": "7c4f534c-e392-42ca-939d-642dc67d89a6", 00:18:14.362 "is_configured": true, 00:18:14.362 "data_offset": 0, 00:18:14.362 "data_size": 65536 00:18:14.362 }, 00:18:14.362 { 00:18:14.362 "name": "BaseBdev4", 00:18:14.362 "uuid": "f45b4125-48c3-4485-a020-1400b7596bd0", 00:18:14.362 "is_configured": true, 00:18:14.362 "data_offset": 0, 00:18:14.362 "data_size": 65536 00:18:14.362 } 00:18:14.362 ] 00:18:14.362 }' 00:18:14.362 13:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:14.362 13:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.930 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.930 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:15.190 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:15.190 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:15.450 [2024-07-25 13:18:25.705909] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:15.450 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:15.450 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:15.450 13:18:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:15.450 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:15.450 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:15.450 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:15.450 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:15.450 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:15.450 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:15.450 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:15.450 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.450 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.710 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:15.710 "name": "Existed_Raid", 00:18:15.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.710 "strip_size_kb": 64, 00:18:15.710 "state": "configuring", 00:18:15.710 "raid_level": "raid0", 00:18:15.710 "superblock": false, 00:18:15.710 "num_base_bdevs": 4, 00:18:15.710 "num_base_bdevs_discovered": 2, 00:18:15.710 "num_base_bdevs_operational": 4, 00:18:15.710 "base_bdevs_list": [ 00:18:15.710 { 00:18:15.710 "name": null, 00:18:15.710 "uuid": "e9e3892e-1fad-468b-9aa8-e40059cebf6e", 00:18:15.710 "is_configured": false, 00:18:15.710 "data_offset": 0, 00:18:15.710 "data_size": 65536 00:18:15.710 }, 00:18:15.710 { 00:18:15.710 "name": null, 00:18:15.710 "uuid": "a6b8a6c0-3fd2-4890-9285-b34c3cc21c9e", 
00:18:15.710 "is_configured": false, 00:18:15.710 "data_offset": 0, 00:18:15.710 "data_size": 65536 00:18:15.710 }, 00:18:15.710 { 00:18:15.710 "name": "BaseBdev3", 00:18:15.710 "uuid": "7c4f534c-e392-42ca-939d-642dc67d89a6", 00:18:15.710 "is_configured": true, 00:18:15.710 "data_offset": 0, 00:18:15.710 "data_size": 65536 00:18:15.710 }, 00:18:15.710 { 00:18:15.710 "name": "BaseBdev4", 00:18:15.710 "uuid": "f45b4125-48c3-4485-a020-1400b7596bd0", 00:18:15.710 "is_configured": true, 00:18:15.710 "data_offset": 0, 00:18:15.710 "data_size": 65536 00:18:15.710 } 00:18:15.710 ] 00:18:15.710 }' 00:18:15.710 13:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:15.710 13:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.277 13:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:16.277 13:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.536 13:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:16.536 13:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:16.536 [2024-07-25 13:18:26.983239] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.536 13:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:16.536 13:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:16.536 13:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:16.536 
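A minimal editorial sketch, not part of the captured run: the `is_configured` checks traced above fetch the raid bdev list over RPC and index into `base_bdevs_list` with jq, then compare the result with a `[[ ]]` pattern match. Replayed here against a trimmed copy of the JSON this log shows, assuming `jq` is available; in the live test the same filters run against `rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all`.

```shell
# Trimmed sample mirroring the dump above: slot 0 lost its name when the
# backing malloc bdev was deleted, so it reports is_configured=false.
sample='[{"name":"Existed_Raid","base_bdevs_list":[{"name":null,"is_configured":false},{"name":"BaseBdev2","is_configured":true}]}]'
# Same filter shape as bdev_raid.sh@327 / @331: index into base_bdevs_list.
cfg0=$(echo "$sample" | jq '.[0].base_bdevs_list[0].is_configured')
cfg1=$(echo "$sample" | jq '.[0].base_bdevs_list[1].is_configured')
[[ $cfg0 == false ]] && [[ $cfg1 == true ]] && echo "slot states match"
```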
13:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:16.536 13:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:16.536 13:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:16.536 13:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.536 13:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.536 13:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.536 13:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.536 13:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.536 13:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.795 13:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.795 "name": "Existed_Raid", 00:18:16.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.795 "strip_size_kb": 64, 00:18:16.795 "state": "configuring", 00:18:16.795 "raid_level": "raid0", 00:18:16.795 "superblock": false, 00:18:16.795 "num_base_bdevs": 4, 00:18:16.795 "num_base_bdevs_discovered": 3, 00:18:16.795 "num_base_bdevs_operational": 4, 00:18:16.795 "base_bdevs_list": [ 00:18:16.795 { 00:18:16.795 "name": null, 00:18:16.795 "uuid": "e9e3892e-1fad-468b-9aa8-e40059cebf6e", 00:18:16.795 "is_configured": false, 00:18:16.795 "data_offset": 0, 00:18:16.795 "data_size": 65536 00:18:16.795 }, 00:18:16.795 { 00:18:16.795 "name": "BaseBdev2", 00:18:16.795 "uuid": "a6b8a6c0-3fd2-4890-9285-b34c3cc21c9e", 00:18:16.795 "is_configured": true, 00:18:16.795 "data_offset": 0, 
00:18:16.795 "data_size": 65536 00:18:16.795 }, 00:18:16.795 { 00:18:16.796 "name": "BaseBdev3", 00:18:16.796 "uuid": "7c4f534c-e392-42ca-939d-642dc67d89a6", 00:18:16.796 "is_configured": true, 00:18:16.796 "data_offset": 0, 00:18:16.796 "data_size": 65536 00:18:16.796 }, 00:18:16.796 { 00:18:16.796 "name": "BaseBdev4", 00:18:16.796 "uuid": "f45b4125-48c3-4485-a020-1400b7596bd0", 00:18:16.796 "is_configured": true, 00:18:16.796 "data_offset": 0, 00:18:16.796 "data_size": 65536 00:18:16.796 } 00:18:16.796 ] 00:18:16.796 }' 00:18:16.796 13:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.796 13:18:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.364 13:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.364 13:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:17.623 13:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:17.623 13:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.623 13:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:17.882 13:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e9e3892e-1fad-468b-9aa8-e40059cebf6e 00:18:18.142 [2024-07-25 13:18:28.486241] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:18.142 [2024-07-25 13:18:28.486275] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0x1c3dcb0 00:18:18.142 [2024-07-25 13:18:28.486283] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:18.142 [2024-07-25 13:18:28.486455] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1de7f30 00:18:18.142 [2024-07-25 13:18:28.486557] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c3dcb0 00:18:18.142 [2024-07-25 13:18:28.486566] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1c3dcb0 00:18:18.142 [2024-07-25 13:18:28.486706] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.142 NewBaseBdev 00:18:18.142 13:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:18.142 13:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:18.142 13:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:18.142 13:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:18.142 13:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:18.142 13:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:18.142 13:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:18.446 13:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:18.707 [ 00:18:18.707 { 00:18:18.707 "name": "NewBaseBdev", 00:18:18.707 "aliases": [ 00:18:18.707 "e9e3892e-1fad-468b-9aa8-e40059cebf6e" 00:18:18.707 ], 00:18:18.707 "product_name": "Malloc disk", 00:18:18.707 
"block_size": 512, 00:18:18.707 "num_blocks": 65536, 00:18:18.707 "uuid": "e9e3892e-1fad-468b-9aa8-e40059cebf6e", 00:18:18.707 "assigned_rate_limits": { 00:18:18.707 "rw_ios_per_sec": 0, 00:18:18.707 "rw_mbytes_per_sec": 0, 00:18:18.707 "r_mbytes_per_sec": 0, 00:18:18.707 "w_mbytes_per_sec": 0 00:18:18.707 }, 00:18:18.707 "claimed": true, 00:18:18.707 "claim_type": "exclusive_write", 00:18:18.707 "zoned": false, 00:18:18.707 "supported_io_types": { 00:18:18.707 "read": true, 00:18:18.707 "write": true, 00:18:18.707 "unmap": true, 00:18:18.707 "flush": true, 00:18:18.707 "reset": true, 00:18:18.707 "nvme_admin": false, 00:18:18.707 "nvme_io": false, 00:18:18.707 "nvme_io_md": false, 00:18:18.707 "write_zeroes": true, 00:18:18.707 "zcopy": true, 00:18:18.707 "get_zone_info": false, 00:18:18.707 "zone_management": false, 00:18:18.707 "zone_append": false, 00:18:18.707 "compare": false, 00:18:18.707 "compare_and_write": false, 00:18:18.707 "abort": true, 00:18:18.707 "seek_hole": false, 00:18:18.707 "seek_data": false, 00:18:18.707 "copy": true, 00:18:18.707 "nvme_iov_md": false 00:18:18.707 }, 00:18:18.707 "memory_domains": [ 00:18:18.707 { 00:18:18.707 "dma_device_id": "system", 00:18:18.707 "dma_device_type": 1 00:18:18.707 }, 00:18:18.707 { 00:18:18.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.707 "dma_device_type": 2 00:18:18.707 } 00:18:18.707 ], 00:18:18.707 "driver_specific": {} 00:18:18.707 } 00:18:18.707 ] 00:18:18.707 13:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:18.707 13:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:18.707 13:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:18.707 13:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:18.707 13:18:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:18.707 13:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:18.707 13:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:18.707 13:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:18.707 13:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:18.707 13:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:18.707 13:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:18.707 13:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.707 13:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.707 13:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:18.707 "name": "Existed_Raid", 00:18:18.707 "uuid": "1b2ac832-55bf-4e95-8b2e-827d098e0a21", 00:18:18.707 "strip_size_kb": 64, 00:18:18.707 "state": "online", 00:18:18.707 "raid_level": "raid0", 00:18:18.707 "superblock": false, 00:18:18.707 "num_base_bdevs": 4, 00:18:18.707 "num_base_bdevs_discovered": 4, 00:18:18.707 "num_base_bdevs_operational": 4, 00:18:18.707 "base_bdevs_list": [ 00:18:18.707 { 00:18:18.707 "name": "NewBaseBdev", 00:18:18.707 "uuid": "e9e3892e-1fad-468b-9aa8-e40059cebf6e", 00:18:18.707 "is_configured": true, 00:18:18.707 "data_offset": 0, 00:18:18.707 "data_size": 65536 00:18:18.707 }, 00:18:18.708 { 00:18:18.708 "name": "BaseBdev2", 00:18:18.708 "uuid": "a6b8a6c0-3fd2-4890-9285-b34c3cc21c9e", 00:18:18.708 "is_configured": true, 00:18:18.708 "data_offset": 0, 00:18:18.708 "data_size": 65536 00:18:18.708 }, 
00:18:18.708 { 00:18:18.708 "name": "BaseBdev3", 00:18:18.708 "uuid": "7c4f534c-e392-42ca-939d-642dc67d89a6", 00:18:18.708 "is_configured": true, 00:18:18.708 "data_offset": 0, 00:18:18.708 "data_size": 65536 00:18:18.708 }, 00:18:18.708 { 00:18:18.708 "name": "BaseBdev4", 00:18:18.708 "uuid": "f45b4125-48c3-4485-a020-1400b7596bd0", 00:18:18.708 "is_configured": true, 00:18:18.708 "data_offset": 0, 00:18:18.708 "data_size": 65536 00:18:18.708 } 00:18:18.708 ] 00:18:18.708 }' 00:18:18.708 13:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:18.708 13:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.646 13:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:19.646 13:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:19.646 13:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:19.646 13:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:19.646 13:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:19.646 13:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:19.646 13:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:19.646 13:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:19.646 [2024-07-25 13:18:29.978479] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.646 13:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:19.646 "name": "Existed_Raid", 00:18:19.646 "aliases": [ 00:18:19.646 "1b2ac832-55bf-4e95-8b2e-827d098e0a21" 
00:18:19.646 ], 00:18:19.646 "product_name": "Raid Volume", 00:18:19.646 "block_size": 512, 00:18:19.646 "num_blocks": 262144, 00:18:19.646 "uuid": "1b2ac832-55bf-4e95-8b2e-827d098e0a21", 00:18:19.646 "assigned_rate_limits": { 00:18:19.646 "rw_ios_per_sec": 0, 00:18:19.646 "rw_mbytes_per_sec": 0, 00:18:19.646 "r_mbytes_per_sec": 0, 00:18:19.646 "w_mbytes_per_sec": 0 00:18:19.646 }, 00:18:19.646 "claimed": false, 00:18:19.646 "zoned": false, 00:18:19.646 "supported_io_types": { 00:18:19.646 "read": true, 00:18:19.646 "write": true, 00:18:19.646 "unmap": true, 00:18:19.646 "flush": true, 00:18:19.646 "reset": true, 00:18:19.646 "nvme_admin": false, 00:18:19.646 "nvme_io": false, 00:18:19.646 "nvme_io_md": false, 00:18:19.646 "write_zeroes": true, 00:18:19.646 "zcopy": false, 00:18:19.646 "get_zone_info": false, 00:18:19.646 "zone_management": false, 00:18:19.646 "zone_append": false, 00:18:19.646 "compare": false, 00:18:19.646 "compare_and_write": false, 00:18:19.646 "abort": false, 00:18:19.646 "seek_hole": false, 00:18:19.646 "seek_data": false, 00:18:19.646 "copy": false, 00:18:19.646 "nvme_iov_md": false 00:18:19.646 }, 00:18:19.646 "memory_domains": [ 00:18:19.646 { 00:18:19.646 "dma_device_id": "system", 00:18:19.646 "dma_device_type": 1 00:18:19.646 }, 00:18:19.646 { 00:18:19.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.646 "dma_device_type": 2 00:18:19.646 }, 00:18:19.646 { 00:18:19.646 "dma_device_id": "system", 00:18:19.646 "dma_device_type": 1 00:18:19.646 }, 00:18:19.646 { 00:18:19.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.646 "dma_device_type": 2 00:18:19.646 }, 00:18:19.646 { 00:18:19.646 "dma_device_id": "system", 00:18:19.646 "dma_device_type": 1 00:18:19.646 }, 00:18:19.646 { 00:18:19.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.647 "dma_device_type": 2 00:18:19.647 }, 00:18:19.647 { 00:18:19.647 "dma_device_id": "system", 00:18:19.647 "dma_device_type": 1 00:18:19.647 }, 00:18:19.647 { 00:18:19.647 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:19.647 "dma_device_type": 2 00:18:19.647 } 00:18:19.647 ], 00:18:19.647 "driver_specific": { 00:18:19.647 "raid": { 00:18:19.647 "uuid": "1b2ac832-55bf-4e95-8b2e-827d098e0a21", 00:18:19.647 "strip_size_kb": 64, 00:18:19.647 "state": "online", 00:18:19.647 "raid_level": "raid0", 00:18:19.647 "superblock": false, 00:18:19.647 "num_base_bdevs": 4, 00:18:19.647 "num_base_bdevs_discovered": 4, 00:18:19.647 "num_base_bdevs_operational": 4, 00:18:19.647 "base_bdevs_list": [ 00:18:19.647 { 00:18:19.647 "name": "NewBaseBdev", 00:18:19.647 "uuid": "e9e3892e-1fad-468b-9aa8-e40059cebf6e", 00:18:19.647 "is_configured": true, 00:18:19.647 "data_offset": 0, 00:18:19.647 "data_size": 65536 00:18:19.647 }, 00:18:19.647 { 00:18:19.647 "name": "BaseBdev2", 00:18:19.647 "uuid": "a6b8a6c0-3fd2-4890-9285-b34c3cc21c9e", 00:18:19.647 "is_configured": true, 00:18:19.647 "data_offset": 0, 00:18:19.647 "data_size": 65536 00:18:19.647 }, 00:18:19.647 { 00:18:19.647 "name": "BaseBdev3", 00:18:19.647 "uuid": "7c4f534c-e392-42ca-939d-642dc67d89a6", 00:18:19.647 "is_configured": true, 00:18:19.647 "data_offset": 0, 00:18:19.647 "data_size": 65536 00:18:19.647 }, 00:18:19.647 { 00:18:19.647 "name": "BaseBdev4", 00:18:19.647 "uuid": "f45b4125-48c3-4485-a020-1400b7596bd0", 00:18:19.647 "is_configured": true, 00:18:19.647 "data_offset": 0, 00:18:19.647 "data_size": 65536 00:18:19.647 } 00:18:19.647 ] 00:18:19.647 } 00:18:19.647 } 00:18:19.647 }' 00:18:19.647 13:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:19.647 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:19.647 BaseBdev2 00:18:19.647 BaseBdev3 00:18:19.647 BaseBdev4' 00:18:19.647 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:19.647 13:18:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:19.647 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:19.906 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:19.906 "name": "NewBaseBdev", 00:18:19.906 "aliases": [ 00:18:19.906 "e9e3892e-1fad-468b-9aa8-e40059cebf6e" 00:18:19.906 ], 00:18:19.906 "product_name": "Malloc disk", 00:18:19.906 "block_size": 512, 00:18:19.906 "num_blocks": 65536, 00:18:19.906 "uuid": "e9e3892e-1fad-468b-9aa8-e40059cebf6e", 00:18:19.906 "assigned_rate_limits": { 00:18:19.906 "rw_ios_per_sec": 0, 00:18:19.906 "rw_mbytes_per_sec": 0, 00:18:19.906 "r_mbytes_per_sec": 0, 00:18:19.906 "w_mbytes_per_sec": 0 00:18:19.906 }, 00:18:19.906 "claimed": true, 00:18:19.906 "claim_type": "exclusive_write", 00:18:19.906 "zoned": false, 00:18:19.906 "supported_io_types": { 00:18:19.906 "read": true, 00:18:19.906 "write": true, 00:18:19.906 "unmap": true, 00:18:19.906 "flush": true, 00:18:19.906 "reset": true, 00:18:19.906 "nvme_admin": false, 00:18:19.906 "nvme_io": false, 00:18:19.907 "nvme_io_md": false, 00:18:19.907 "write_zeroes": true, 00:18:19.907 "zcopy": true, 00:18:19.907 "get_zone_info": false, 00:18:19.907 "zone_management": false, 00:18:19.907 "zone_append": false, 00:18:19.907 "compare": false, 00:18:19.907 "compare_and_write": false, 00:18:19.907 "abort": true, 00:18:19.907 "seek_hole": false, 00:18:19.907 "seek_data": false, 00:18:19.907 "copy": true, 00:18:19.907 "nvme_iov_md": false 00:18:19.907 }, 00:18:19.907 "memory_domains": [ 00:18:19.907 { 00:18:19.907 "dma_device_id": "system", 00:18:19.907 "dma_device_type": 1 00:18:19.907 }, 00:18:19.907 { 00:18:19.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.907 "dma_device_type": 2 00:18:19.907 } 00:18:19.907 ], 00:18:19.907 "driver_specific": {} 00:18:19.907 }' 00:18:19.907 13:18:30 
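Editorial sketch, assuming `jq`: the `@205`-`@208` checks that follow each pull one field from the saved base-bdev record and compare it with `[[ ]]`. For a plain Malloc disk, `block_size` must be 512 while `md_size`, `md_interleave` and `dif_type` must all be null. Replayed against a trimmed `NewBaseBdev` record copied from this log.

```shell
# Trimmed copy of the NewBaseBdev record captured above.
base_bdev_info='{"name":"NewBaseBdev","block_size":512,"num_blocks":65536}'
bs=$(echo "$base_bdev_info" | jq .block_size)
md=$(echo "$base_bdev_info" | jq .md_size)    # absent key prints null
dif=$(echo "$base_bdev_info" | jq .dif_type)  # likewise null
[[ $bs == 512 && $md == null && $dif == null ]] && echo "base bdev properties ok"
```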
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.907 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.907 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:19.907 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.166 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.166 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:20.166 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.166 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.166 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:20.166 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.166 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.166 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.166 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:20.166 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:20.166 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:20.425 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:20.425 "name": "BaseBdev2", 00:18:20.425 "aliases": [ 00:18:20.425 "a6b8a6c0-3fd2-4890-9285-b34c3cc21c9e" 00:18:20.425 ], 00:18:20.425 "product_name": "Malloc disk", 00:18:20.425 "block_size": 512, 00:18:20.425 "num_blocks": 65536, 00:18:20.425 "uuid": 
"a6b8a6c0-3fd2-4890-9285-b34c3cc21c9e", 00:18:20.425 "assigned_rate_limits": { 00:18:20.425 "rw_ios_per_sec": 0, 00:18:20.425 "rw_mbytes_per_sec": 0, 00:18:20.425 "r_mbytes_per_sec": 0, 00:18:20.425 "w_mbytes_per_sec": 0 00:18:20.425 }, 00:18:20.425 "claimed": true, 00:18:20.425 "claim_type": "exclusive_write", 00:18:20.425 "zoned": false, 00:18:20.425 "supported_io_types": { 00:18:20.425 "read": true, 00:18:20.425 "write": true, 00:18:20.425 "unmap": true, 00:18:20.425 "flush": true, 00:18:20.425 "reset": true, 00:18:20.425 "nvme_admin": false, 00:18:20.425 "nvme_io": false, 00:18:20.425 "nvme_io_md": false, 00:18:20.425 "write_zeroes": true, 00:18:20.425 "zcopy": true, 00:18:20.425 "get_zone_info": false, 00:18:20.425 "zone_management": false, 00:18:20.425 "zone_append": false, 00:18:20.425 "compare": false, 00:18:20.425 "compare_and_write": false, 00:18:20.425 "abort": true, 00:18:20.425 "seek_hole": false, 00:18:20.425 "seek_data": false, 00:18:20.425 "copy": true, 00:18:20.425 "nvme_iov_md": false 00:18:20.425 }, 00:18:20.425 "memory_domains": [ 00:18:20.425 { 00:18:20.425 "dma_device_id": "system", 00:18:20.425 "dma_device_type": 1 00:18:20.425 }, 00:18:20.425 { 00:18:20.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.425 "dma_device_type": 2 00:18:20.425 } 00:18:20.425 ], 00:18:20.425 "driver_specific": {} 00:18:20.425 }' 00:18:20.425 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:20.425 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:20.684 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:20.684 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.684 13:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.684 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:20.684 13:18:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.684 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.684 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:20.684 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.684 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.941 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.941 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:20.941 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:20.941 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:20.941 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:20.941 "name": "BaseBdev3", 00:18:20.941 "aliases": [ 00:18:20.941 "7c4f534c-e392-42ca-939d-642dc67d89a6" 00:18:20.941 ], 00:18:20.941 "product_name": "Malloc disk", 00:18:20.941 "block_size": 512, 00:18:20.941 "num_blocks": 65536, 00:18:20.941 "uuid": "7c4f534c-e392-42ca-939d-642dc67d89a6", 00:18:20.941 "assigned_rate_limits": { 00:18:20.941 "rw_ios_per_sec": 0, 00:18:20.941 "rw_mbytes_per_sec": 0, 00:18:20.941 "r_mbytes_per_sec": 0, 00:18:20.941 "w_mbytes_per_sec": 0 00:18:20.941 }, 00:18:20.941 "claimed": true, 00:18:20.941 "claim_type": "exclusive_write", 00:18:20.941 "zoned": false, 00:18:20.941 "supported_io_types": { 00:18:20.941 "read": true, 00:18:20.941 "write": true, 00:18:20.941 "unmap": true, 00:18:20.941 "flush": true, 00:18:20.941 "reset": true, 00:18:20.941 "nvme_admin": false, 00:18:20.941 "nvme_io": false, 00:18:20.941 "nvme_io_md": false, 
00:18:20.941 "write_zeroes": true, 00:18:20.941 "zcopy": true, 00:18:20.941 "get_zone_info": false, 00:18:20.941 "zone_management": false, 00:18:20.941 "zone_append": false, 00:18:20.941 "compare": false, 00:18:20.941 "compare_and_write": false, 00:18:20.941 "abort": true, 00:18:20.941 "seek_hole": false, 00:18:20.941 "seek_data": false, 00:18:20.941 "copy": true, 00:18:20.941 "nvme_iov_md": false 00:18:20.941 }, 00:18:20.941 "memory_domains": [ 00:18:20.941 { 00:18:20.941 "dma_device_id": "system", 00:18:20.941 "dma_device_type": 1 00:18:20.941 }, 00:18:20.941 { 00:18:20.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.941 "dma_device_type": 2 00:18:20.941 } 00:18:20.941 ], 00:18:20.941 "driver_specific": {} 00:18:20.941 }' 00:18:20.941 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:21.200 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:21.200 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:21.200 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:21.200 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:21.200 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:21.200 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:21.200 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:21.200 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:21.200 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:21.458 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:21.458 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:21.458 13:18:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:21.458 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:21.458 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:21.717 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:21.717 "name": "BaseBdev4", 00:18:21.717 "aliases": [ 00:18:21.717 "f45b4125-48c3-4485-a020-1400b7596bd0" 00:18:21.717 ], 00:18:21.717 "product_name": "Malloc disk", 00:18:21.717 "block_size": 512, 00:18:21.717 "num_blocks": 65536, 00:18:21.717 "uuid": "f45b4125-48c3-4485-a020-1400b7596bd0", 00:18:21.717 "assigned_rate_limits": { 00:18:21.717 "rw_ios_per_sec": 0, 00:18:21.717 "rw_mbytes_per_sec": 0, 00:18:21.717 "r_mbytes_per_sec": 0, 00:18:21.717 "w_mbytes_per_sec": 0 00:18:21.717 }, 00:18:21.717 "claimed": true, 00:18:21.717 "claim_type": "exclusive_write", 00:18:21.717 "zoned": false, 00:18:21.717 "supported_io_types": { 00:18:21.717 "read": true, 00:18:21.717 "write": true, 00:18:21.717 "unmap": true, 00:18:21.717 "flush": true, 00:18:21.717 "reset": true, 00:18:21.717 "nvme_admin": false, 00:18:21.717 "nvme_io": false, 00:18:21.717 "nvme_io_md": false, 00:18:21.717 "write_zeroes": true, 00:18:21.717 "zcopy": true, 00:18:21.717 "get_zone_info": false, 00:18:21.717 "zone_management": false, 00:18:21.717 "zone_append": false, 00:18:21.717 "compare": false, 00:18:21.717 "compare_and_write": false, 00:18:21.717 "abort": true, 00:18:21.717 "seek_hole": false, 00:18:21.717 "seek_data": false, 00:18:21.717 "copy": true, 00:18:21.717 "nvme_iov_md": false 00:18:21.717 }, 00:18:21.717 "memory_domains": [ 00:18:21.717 { 00:18:21.717 "dma_device_id": "system", 00:18:21.717 "dma_device_type": 1 00:18:21.717 }, 00:18:21.717 { 00:18:21.717 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:21.717 "dma_device_type": 2 00:18:21.717 } 00:18:21.717 ], 00:18:21.717 "driver_specific": {} 00:18:21.717 }' 00:18:21.717 13:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:21.717 13:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:21.717 13:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:21.717 13:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:21.717 13:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:21.717 13:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:21.717 13:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:21.717 13:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:21.976 13:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:21.976 13:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:21.976 13:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:21.976 13:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:21.976 13:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:22.235 [2024-07-25 13:18:32.524965] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:22.235 [2024-07-25 13:18:32.524989] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:22.235 [2024-07-25 13:18:32.525032] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.235 [2024-07-25 13:18:32.525083] 
bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.235 [2024-07-25 13:18:32.525094] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c3dcb0 name Existed_Raid, state offline 00:18:22.235 13:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 899543 00:18:22.235 13:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 899543 ']' 00:18:22.235 13:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 899543 00:18:22.235 13:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:18:22.235 13:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:22.235 13:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 899543 00:18:22.235 13:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:22.235 13:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:22.235 13:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 899543' 00:18:22.235 killing process with pid 899543 00:18:22.235 13:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 899543 00:18:22.235 [2024-07-25 13:18:32.598607] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:22.235 13:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 899543 00:18:22.235 [2024-07-25 13:18:32.628632] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.494 13:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:22.494 00:18:22.494 real 0m31.601s 00:18:22.494 user 0m58.113s 00:18:22.494 sys 0m5.558s 00:18:22.494 13:18:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:22.494 13:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.494 ************************************ 00:18:22.494 END TEST raid_state_function_test 00:18:22.494 ************************************ 00:18:22.495 13:18:32 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:18:22.495 13:18:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:22.495 13:18:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:22.495 13:18:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.495 ************************************ 00:18:22.495 START TEST raid_state_function_test_sb 00:18:22.495 ************************************ 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # 
strip_size_create_arg='-z 64' 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=905964 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 905964' 00:18:22.495 Process raid pid: 905964 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 905964 /var/tmp/spdk-raid.sock 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 905964 ']' 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:22.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:22.495 13:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.495 [2024-07-25 13:18:32.963640] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:18:22.495 [2024-07-25 13:18:32.963697] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:22.755 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:22.755 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:22.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:22.755 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:22.755 [2024-07-25 13:18:33.083889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.755 [2024-07-25 13:18:33.169996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.755 [2024-07-25 13:18:33.229527] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:22.755 [2024-07-25 13:18:33.229562] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.692 13:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:23.692 13:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:23.692 13:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:23.692 [2024-07-25 13:18:34.045171] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:23.692 [2024-07-25 13:18:34.045207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:18:23.692 [2024-07-25 13:18:34.045217] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:23.692 [2024-07-25 13:18:34.045228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:23.692 [2024-07-25 13:18:34.045236] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:23.692 [2024-07-25 13:18:34.045246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:23.692 [2024-07-25 13:18:34.045254] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:23.692 [2024-07-25 13:18:34.045264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:23.692 13:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:23.692 13:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:23.692 13:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:23.692 13:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:23.692 13:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:23.692 13:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:23.692 13:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:23.692 13:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:23.692 13:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:23.692 13:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:18:23.692 13:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.692 13:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.951 13:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.951 "name": "Existed_Raid", 00:18:23.951 "uuid": "576163e3-abef-4f0e-88a4-1c8e50c902ab", 00:18:23.951 "strip_size_kb": 64, 00:18:23.951 "state": "configuring", 00:18:23.951 "raid_level": "raid0", 00:18:23.951 "superblock": true, 00:18:23.951 "num_base_bdevs": 4, 00:18:23.951 "num_base_bdevs_discovered": 0, 00:18:23.951 "num_base_bdevs_operational": 4, 00:18:23.951 "base_bdevs_list": [ 00:18:23.951 { 00:18:23.951 "name": "BaseBdev1", 00:18:23.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.951 "is_configured": false, 00:18:23.951 "data_offset": 0, 00:18:23.951 "data_size": 0 00:18:23.951 }, 00:18:23.951 { 00:18:23.951 "name": "BaseBdev2", 00:18:23.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.951 "is_configured": false, 00:18:23.951 "data_offset": 0, 00:18:23.951 "data_size": 0 00:18:23.951 }, 00:18:23.951 { 00:18:23.951 "name": "BaseBdev3", 00:18:23.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.951 "is_configured": false, 00:18:23.951 "data_offset": 0, 00:18:23.951 "data_size": 0 00:18:23.951 }, 00:18:23.951 { 00:18:23.951 "name": "BaseBdev4", 00:18:23.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.951 "is_configured": false, 00:18:23.951 "data_offset": 0, 00:18:23.951 "data_size": 0 00:18:23.951 } 00:18:23.951 ] 00:18:23.951 }' 00:18:23.951 13:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.951 13:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.554 
13:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:24.820 [2024-07-25 13:18:35.047668] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:24.820 [2024-07-25 13:18:35.047699] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1455f60 name Existed_Raid, state configuring 00:18:24.820 13:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:24.820 [2024-07-25 13:18:35.272282] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:24.820 [2024-07-25 13:18:35.272306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:24.820 [2024-07-25 13:18:35.272315] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:24.820 [2024-07-25 13:18:35.272326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:24.820 [2024-07-25 13:18:35.272334] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:24.820 [2024-07-25 13:18:35.272343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:24.820 [2024-07-25 13:18:35.272352] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:24.820 [2024-07-25 13:18:35.272362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:24.820 13:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev1 00:18:25.078 [2024-07-25 13:18:35.518472] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:25.078 BaseBdev1 00:18:25.078 13:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:25.078 13:18:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:25.078 13:18:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:25.078 13:18:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:25.078 13:18:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:25.078 13:18:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:25.078 13:18:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:25.337 13:18:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:25.596 [ 00:18:25.596 { 00:18:25.596 "name": "BaseBdev1", 00:18:25.596 "aliases": [ 00:18:25.596 "c251d64f-a02c-4066-bdbd-70a95225f136" 00:18:25.596 ], 00:18:25.596 "product_name": "Malloc disk", 00:18:25.596 "block_size": 512, 00:18:25.596 "num_blocks": 65536, 00:18:25.596 "uuid": "c251d64f-a02c-4066-bdbd-70a95225f136", 00:18:25.596 "assigned_rate_limits": { 00:18:25.596 "rw_ios_per_sec": 0, 00:18:25.596 "rw_mbytes_per_sec": 0, 00:18:25.596 "r_mbytes_per_sec": 0, 00:18:25.596 "w_mbytes_per_sec": 0 00:18:25.596 }, 00:18:25.596 "claimed": true, 00:18:25.596 "claim_type": "exclusive_write", 00:18:25.596 "zoned": false, 00:18:25.596 "supported_io_types": { 00:18:25.596 "read": true, 00:18:25.596 "write": 
true, 00:18:25.596 "unmap": true, 00:18:25.596 "flush": true, 00:18:25.596 "reset": true, 00:18:25.596 "nvme_admin": false, 00:18:25.596 "nvme_io": false, 00:18:25.596 "nvme_io_md": false, 00:18:25.596 "write_zeroes": true, 00:18:25.596 "zcopy": true, 00:18:25.596 "get_zone_info": false, 00:18:25.596 "zone_management": false, 00:18:25.596 "zone_append": false, 00:18:25.596 "compare": false, 00:18:25.596 "compare_and_write": false, 00:18:25.596 "abort": true, 00:18:25.596 "seek_hole": false, 00:18:25.596 "seek_data": false, 00:18:25.596 "copy": true, 00:18:25.596 "nvme_iov_md": false 00:18:25.596 }, 00:18:25.596 "memory_domains": [ 00:18:25.596 { 00:18:25.596 "dma_device_id": "system", 00:18:25.596 "dma_device_type": 1 00:18:25.596 }, 00:18:25.596 { 00:18:25.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.596 "dma_device_type": 2 00:18:25.596 } 00:18:25.596 ], 00:18:25.596 "driver_specific": {} 00:18:25.596 } 00:18:25.596 ] 00:18:25.596 13:18:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:25.596 13:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:25.596 13:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:25.596 13:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:25.596 13:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:25.596 13:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:25.596 13:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:25.596 13:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.596 13:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:18:25.596 13:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.596 13:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.596 13:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.596 13:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.856 13:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.856 "name": "Existed_Raid", 00:18:25.856 "uuid": "c5cbc04d-0e4b-49c2-91f4-68ecbc33fa4e", 00:18:25.856 "strip_size_kb": 64, 00:18:25.856 "state": "configuring", 00:18:25.856 "raid_level": "raid0", 00:18:25.856 "superblock": true, 00:18:25.856 "num_base_bdevs": 4, 00:18:25.856 "num_base_bdevs_discovered": 1, 00:18:25.856 "num_base_bdevs_operational": 4, 00:18:25.856 "base_bdevs_list": [ 00:18:25.856 { 00:18:25.856 "name": "BaseBdev1", 00:18:25.856 "uuid": "c251d64f-a02c-4066-bdbd-70a95225f136", 00:18:25.856 "is_configured": true, 00:18:25.856 "data_offset": 2048, 00:18:25.856 "data_size": 63488 00:18:25.856 }, 00:18:25.856 { 00:18:25.856 "name": "BaseBdev2", 00:18:25.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.856 "is_configured": false, 00:18:25.856 "data_offset": 0, 00:18:25.856 "data_size": 0 00:18:25.856 }, 00:18:25.856 { 00:18:25.856 "name": "BaseBdev3", 00:18:25.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.856 "is_configured": false, 00:18:25.856 "data_offset": 0, 00:18:25.856 "data_size": 0 00:18:25.856 }, 00:18:25.856 { 00:18:25.856 "name": "BaseBdev4", 00:18:25.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.856 "is_configured": false, 00:18:25.856 "data_offset": 0, 00:18:25.856 "data_size": 0 00:18:25.856 } 00:18:25.856 ] 
00:18:25.856 }' 00:18:25.856 13:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.856 13:18:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.424 13:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:26.683 [2024-07-25 13:18:36.990347] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:26.683 [2024-07-25 13:18:36.990382] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14557d0 name Existed_Raid, state configuring 00:18:26.683 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:26.943 [2024-07-25 13:18:37.218982] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.943 [2024-07-25 13:18:37.220364] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:26.943 [2024-07-25 13:18:37.220394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:26.943 [2024-07-25 13:18:37.220404] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:26.943 [2024-07-25 13:18:37.220414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:26.943 [2024-07-25 13:18:37.220423] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:26.943 [2024-07-25 13:18:37.220433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:26.943 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 
00:18:26.943 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:26.943 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:26.943 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:26.943 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:26.943 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:26.943 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:26.943 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:26.943 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:26.943 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:26.943 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:26.943 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:26.943 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.943 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.202 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:27.202 "name": "Existed_Raid", 00:18:27.202 "uuid": "c4229ed3-021f-4264-9c61-b88d713e0163", 00:18:27.202 "strip_size_kb": 64, 00:18:27.202 "state": "configuring", 00:18:27.202 "raid_level": "raid0", 00:18:27.202 "superblock": true, 
00:18:27.202 "num_base_bdevs": 4, 00:18:27.202 "num_base_bdevs_discovered": 1, 00:18:27.202 "num_base_bdevs_operational": 4, 00:18:27.202 "base_bdevs_list": [ 00:18:27.202 { 00:18:27.202 "name": "BaseBdev1", 00:18:27.202 "uuid": "c251d64f-a02c-4066-bdbd-70a95225f136", 00:18:27.202 "is_configured": true, 00:18:27.202 "data_offset": 2048, 00:18:27.202 "data_size": 63488 00:18:27.202 }, 00:18:27.202 { 00:18:27.202 "name": "BaseBdev2", 00:18:27.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.202 "is_configured": false, 00:18:27.202 "data_offset": 0, 00:18:27.202 "data_size": 0 00:18:27.202 }, 00:18:27.202 { 00:18:27.202 "name": "BaseBdev3", 00:18:27.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.202 "is_configured": false, 00:18:27.202 "data_offset": 0, 00:18:27.202 "data_size": 0 00:18:27.202 }, 00:18:27.202 { 00:18:27.202 "name": "BaseBdev4", 00:18:27.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.202 "is_configured": false, 00:18:27.202 "data_offset": 0, 00:18:27.202 "data_size": 0 00:18:27.202 } 00:18:27.202 ] 00:18:27.202 }' 00:18:27.202 13:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:27.202 13:18:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.771 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:27.771 [2024-07-25 13:18:38.244743] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:27.771 BaseBdev2 00:18:28.030 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:28.030 13:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:28.030 13:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:18:28.030 13:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:28.030 13:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:28.030 13:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:28.030 13:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:28.030 13:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:28.289 [ 00:18:28.289 { 00:18:28.289 "name": "BaseBdev2", 00:18:28.289 "aliases": [ 00:18:28.289 "86671356-f1a1-438d-b8b3-9c105964628e" 00:18:28.289 ], 00:18:28.289 "product_name": "Malloc disk", 00:18:28.289 "block_size": 512, 00:18:28.289 "num_blocks": 65536, 00:18:28.289 "uuid": "86671356-f1a1-438d-b8b3-9c105964628e", 00:18:28.289 "assigned_rate_limits": { 00:18:28.289 "rw_ios_per_sec": 0, 00:18:28.289 "rw_mbytes_per_sec": 0, 00:18:28.289 "r_mbytes_per_sec": 0, 00:18:28.289 "w_mbytes_per_sec": 0 00:18:28.289 }, 00:18:28.289 "claimed": true, 00:18:28.289 "claim_type": "exclusive_write", 00:18:28.289 "zoned": false, 00:18:28.289 "supported_io_types": { 00:18:28.289 "read": true, 00:18:28.289 "write": true, 00:18:28.289 "unmap": true, 00:18:28.289 "flush": true, 00:18:28.289 "reset": true, 00:18:28.289 "nvme_admin": false, 00:18:28.289 "nvme_io": false, 00:18:28.289 "nvme_io_md": false, 00:18:28.289 "write_zeroes": true, 00:18:28.289 "zcopy": true, 00:18:28.289 "get_zone_info": false, 00:18:28.289 "zone_management": false, 00:18:28.289 "zone_append": false, 00:18:28.289 "compare": false, 00:18:28.289 "compare_and_write": false, 00:18:28.289 "abort": true, 00:18:28.289 "seek_hole": false, 
00:18:28.289 "seek_data": false, 00:18:28.289 "copy": true, 00:18:28.289 "nvme_iov_md": false 00:18:28.289 }, 00:18:28.289 "memory_domains": [ 00:18:28.289 { 00:18:28.289 "dma_device_id": "system", 00:18:28.289 "dma_device_type": 1 00:18:28.289 }, 00:18:28.289 { 00:18:28.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.289 "dma_device_type": 2 00:18:28.289 } 00:18:28.289 ], 00:18:28.289 "driver_specific": {} 00:18:28.289 } 00:18:28.289 ] 00:18:28.289 13:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:28.289 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:28.289 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:28.290 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:28.290 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:28.290 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:28.290 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:28.290 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:28.290 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:28.290 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:28.290 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:28.290 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:28.290 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:28.290 13:18:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.290 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.548 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:28.548 "name": "Existed_Raid", 00:18:28.548 "uuid": "c4229ed3-021f-4264-9c61-b88d713e0163", 00:18:28.548 "strip_size_kb": 64, 00:18:28.548 "state": "configuring", 00:18:28.548 "raid_level": "raid0", 00:18:28.548 "superblock": true, 00:18:28.548 "num_base_bdevs": 4, 00:18:28.548 "num_base_bdevs_discovered": 2, 00:18:28.548 "num_base_bdevs_operational": 4, 00:18:28.548 "base_bdevs_list": [ 00:18:28.548 { 00:18:28.548 "name": "BaseBdev1", 00:18:28.548 "uuid": "c251d64f-a02c-4066-bdbd-70a95225f136", 00:18:28.548 "is_configured": true, 00:18:28.548 "data_offset": 2048, 00:18:28.548 "data_size": 63488 00:18:28.549 }, 00:18:28.549 { 00:18:28.549 "name": "BaseBdev2", 00:18:28.549 "uuid": "86671356-f1a1-438d-b8b3-9c105964628e", 00:18:28.549 "is_configured": true, 00:18:28.549 "data_offset": 2048, 00:18:28.549 "data_size": 63488 00:18:28.549 }, 00:18:28.549 { 00:18:28.549 "name": "BaseBdev3", 00:18:28.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.549 "is_configured": false, 00:18:28.549 "data_offset": 0, 00:18:28.549 "data_size": 0 00:18:28.549 }, 00:18:28.549 { 00:18:28.549 "name": "BaseBdev4", 00:18:28.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.549 "is_configured": false, 00:18:28.549 "data_offset": 0, 00:18:28.549 "data_size": 0 00:18:28.549 } 00:18:28.549 ] 00:18:28.549 }' 00:18:28.549 13:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:28.549 13:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.116 13:18:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:29.375 [2024-07-25 13:18:39.679790] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:29.375 BaseBdev3 00:18:29.375 13:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:29.375 13:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:29.375 13:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:29.375 13:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:29.375 13:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:29.375 13:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:29.375 13:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:29.634 13:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:29.635 [ 00:18:29.635 { 00:18:29.635 "name": "BaseBdev3", 00:18:29.635 "aliases": [ 00:18:29.635 "fdc9f5cd-669b-49e8-b22c-1f6d54cb1f39" 00:18:29.635 ], 00:18:29.635 "product_name": "Malloc disk", 00:18:29.635 "block_size": 512, 00:18:29.635 "num_blocks": 65536, 00:18:29.635 "uuid": "fdc9f5cd-669b-49e8-b22c-1f6d54cb1f39", 00:18:29.635 "assigned_rate_limits": { 00:18:29.635 "rw_ios_per_sec": 0, 00:18:29.635 "rw_mbytes_per_sec": 0, 00:18:29.635 "r_mbytes_per_sec": 0, 00:18:29.635 "w_mbytes_per_sec": 0 00:18:29.635 }, 
00:18:29.635 "claimed": true, 00:18:29.635 "claim_type": "exclusive_write", 00:18:29.635 "zoned": false, 00:18:29.635 "supported_io_types": { 00:18:29.635 "read": true, 00:18:29.635 "write": true, 00:18:29.635 "unmap": true, 00:18:29.635 "flush": true, 00:18:29.635 "reset": true, 00:18:29.635 "nvme_admin": false, 00:18:29.635 "nvme_io": false, 00:18:29.635 "nvme_io_md": false, 00:18:29.635 "write_zeroes": true, 00:18:29.635 "zcopy": true, 00:18:29.635 "get_zone_info": false, 00:18:29.635 "zone_management": false, 00:18:29.635 "zone_append": false, 00:18:29.635 "compare": false, 00:18:29.635 "compare_and_write": false, 00:18:29.635 "abort": true, 00:18:29.635 "seek_hole": false, 00:18:29.635 "seek_data": false, 00:18:29.635 "copy": true, 00:18:29.635 "nvme_iov_md": false 00:18:29.635 }, 00:18:29.635 "memory_domains": [ 00:18:29.635 { 00:18:29.635 "dma_device_id": "system", 00:18:29.635 "dma_device_type": 1 00:18:29.635 }, 00:18:29.635 { 00:18:29.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.635 "dma_device_type": 2 00:18:29.635 } 00:18:29.635 ], 00:18:29.635 "driver_specific": {} 00:18:29.635 } 00:18:29.635 ] 00:18:29.635 13:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:29.635 13:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:29.635 13:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:29.635 13:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:29.635 13:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:29.635 13:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:29.635 13:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:29.635 13:18:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:29.635 13:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:29.635 13:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:29.635 13:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:29.635 13:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:29.635 13:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:29.635 13:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.635 13:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.894 13:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:29.894 "name": "Existed_Raid", 00:18:29.894 "uuid": "c4229ed3-021f-4264-9c61-b88d713e0163", 00:18:29.894 "strip_size_kb": 64, 00:18:29.894 "state": "configuring", 00:18:29.894 "raid_level": "raid0", 00:18:29.894 "superblock": true, 00:18:29.894 "num_base_bdevs": 4, 00:18:29.894 "num_base_bdevs_discovered": 3, 00:18:29.894 "num_base_bdevs_operational": 4, 00:18:29.894 "base_bdevs_list": [ 00:18:29.894 { 00:18:29.894 "name": "BaseBdev1", 00:18:29.894 "uuid": "c251d64f-a02c-4066-bdbd-70a95225f136", 00:18:29.894 "is_configured": true, 00:18:29.894 "data_offset": 2048, 00:18:29.894 "data_size": 63488 00:18:29.894 }, 00:18:29.894 { 00:18:29.894 "name": "BaseBdev2", 00:18:29.894 "uuid": "86671356-f1a1-438d-b8b3-9c105964628e", 00:18:29.894 "is_configured": true, 00:18:29.894 "data_offset": 2048, 00:18:29.894 "data_size": 63488 00:18:29.894 }, 00:18:29.894 { 00:18:29.894 "name": 
"BaseBdev3", 00:18:29.894 "uuid": "fdc9f5cd-669b-49e8-b22c-1f6d54cb1f39", 00:18:29.894 "is_configured": true, 00:18:29.894 "data_offset": 2048, 00:18:29.894 "data_size": 63488 00:18:29.894 }, 00:18:29.894 { 00:18:29.894 "name": "BaseBdev4", 00:18:29.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.894 "is_configured": false, 00:18:29.894 "data_offset": 0, 00:18:29.894 "data_size": 0 00:18:29.894 } 00:18:29.894 ] 00:18:29.894 }' 00:18:29.894 13:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:29.894 13:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.463 13:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:30.723 [2024-07-25 13:18:41.090627] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:30.723 [2024-07-25 13:18:41.090783] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1456840 00:18:30.723 [2024-07-25 13:18:41.090796] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:30.723 [2024-07-25 13:18:41.090954] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1456480 00:18:30.723 [2024-07-25 13:18:41.091067] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1456840 00:18:30.723 [2024-07-25 13:18:41.091076] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1456840 00:18:30.723 [2024-07-25 13:18:41.091172] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.723 BaseBdev4 00:18:30.723 13:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:18:30.723 13:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev4 00:18:30.723 13:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:30.723 13:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:30.723 13:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:30.723 13:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:30.723 13:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:30.981 13:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:31.240 [ 00:18:31.240 { 00:18:31.240 "name": "BaseBdev4", 00:18:31.240 "aliases": [ 00:18:31.240 "425d4648-bdc4-4e99-912d-b2062a3632c0" 00:18:31.240 ], 00:18:31.240 "product_name": "Malloc disk", 00:18:31.240 "block_size": 512, 00:18:31.240 "num_blocks": 65536, 00:18:31.240 "uuid": "425d4648-bdc4-4e99-912d-b2062a3632c0", 00:18:31.240 "assigned_rate_limits": { 00:18:31.240 "rw_ios_per_sec": 0, 00:18:31.240 "rw_mbytes_per_sec": 0, 00:18:31.240 "r_mbytes_per_sec": 0, 00:18:31.240 "w_mbytes_per_sec": 0 00:18:31.240 }, 00:18:31.240 "claimed": true, 00:18:31.240 "claim_type": "exclusive_write", 00:18:31.240 "zoned": false, 00:18:31.240 "supported_io_types": { 00:18:31.240 "read": true, 00:18:31.240 "write": true, 00:18:31.240 "unmap": true, 00:18:31.240 "flush": true, 00:18:31.240 "reset": true, 00:18:31.240 "nvme_admin": false, 00:18:31.240 "nvme_io": false, 00:18:31.240 "nvme_io_md": false, 00:18:31.240 "write_zeroes": true, 00:18:31.240 "zcopy": true, 00:18:31.240 "get_zone_info": false, 00:18:31.240 "zone_management": false, 00:18:31.240 "zone_append": false, 00:18:31.240 
"compare": false, 00:18:31.240 "compare_and_write": false, 00:18:31.240 "abort": true, 00:18:31.240 "seek_hole": false, 00:18:31.240 "seek_data": false, 00:18:31.240 "copy": true, 00:18:31.240 "nvme_iov_md": false 00:18:31.240 }, 00:18:31.240 "memory_domains": [ 00:18:31.240 { 00:18:31.240 "dma_device_id": "system", 00:18:31.240 "dma_device_type": 1 00:18:31.240 }, 00:18:31.240 { 00:18:31.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.240 "dma_device_type": 2 00:18:31.240 } 00:18:31.240 ], 00:18:31.240 "driver_specific": {} 00:18:31.240 } 00:18:31.240 ] 00:18:31.240 13:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:31.240 13:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:31.240 13:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:31.240 13:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:31.240 13:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:31.240 13:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:31.240 13:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:31.240 13:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:31.240 13:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:31.240 13:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:31.240 13:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:31.240 13:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:31.240 13:18:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:31.240 13:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.240 13:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.499 13:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:31.499 "name": "Existed_Raid", 00:18:31.499 "uuid": "c4229ed3-021f-4264-9c61-b88d713e0163", 00:18:31.499 "strip_size_kb": 64, 00:18:31.499 "state": "online", 00:18:31.499 "raid_level": "raid0", 00:18:31.499 "superblock": true, 00:18:31.499 "num_base_bdevs": 4, 00:18:31.499 "num_base_bdevs_discovered": 4, 00:18:31.499 "num_base_bdevs_operational": 4, 00:18:31.499 "base_bdevs_list": [ 00:18:31.499 { 00:18:31.499 "name": "BaseBdev1", 00:18:31.499 "uuid": "c251d64f-a02c-4066-bdbd-70a95225f136", 00:18:31.499 "is_configured": true, 00:18:31.499 "data_offset": 2048, 00:18:31.499 "data_size": 63488 00:18:31.499 }, 00:18:31.500 { 00:18:31.500 "name": "BaseBdev2", 00:18:31.500 "uuid": "86671356-f1a1-438d-b8b3-9c105964628e", 00:18:31.500 "is_configured": true, 00:18:31.500 "data_offset": 2048, 00:18:31.500 "data_size": 63488 00:18:31.500 }, 00:18:31.500 { 00:18:31.500 "name": "BaseBdev3", 00:18:31.500 "uuid": "fdc9f5cd-669b-49e8-b22c-1f6d54cb1f39", 00:18:31.500 "is_configured": true, 00:18:31.500 "data_offset": 2048, 00:18:31.500 "data_size": 63488 00:18:31.500 }, 00:18:31.500 { 00:18:31.500 "name": "BaseBdev4", 00:18:31.500 "uuid": "425d4648-bdc4-4e99-912d-b2062a3632c0", 00:18:31.500 "is_configured": true, 00:18:31.500 "data_offset": 2048, 00:18:31.500 "data_size": 63488 00:18:31.500 } 00:18:31.500 ] 00:18:31.500 }' 00:18:31.500 13:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:31.500 13:18:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.067 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:32.067 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:32.067 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:32.067 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:32.067 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:32.067 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:32.067 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:32.067 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:32.067 [2024-07-25 13:18:42.534746] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.327 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:32.327 "name": "Existed_Raid", 00:18:32.327 "aliases": [ 00:18:32.327 "c4229ed3-021f-4264-9c61-b88d713e0163" 00:18:32.327 ], 00:18:32.327 "product_name": "Raid Volume", 00:18:32.327 "block_size": 512, 00:18:32.327 "num_blocks": 253952, 00:18:32.327 "uuid": "c4229ed3-021f-4264-9c61-b88d713e0163", 00:18:32.327 "assigned_rate_limits": { 00:18:32.327 "rw_ios_per_sec": 0, 00:18:32.327 "rw_mbytes_per_sec": 0, 00:18:32.327 "r_mbytes_per_sec": 0, 00:18:32.327 "w_mbytes_per_sec": 0 00:18:32.327 }, 00:18:32.327 "claimed": false, 00:18:32.327 "zoned": false, 00:18:32.327 "supported_io_types": { 00:18:32.327 "read": true, 00:18:32.327 "write": true, 00:18:32.327 "unmap": true, 
00:18:32.327 "flush": true, 00:18:32.327 "reset": true, 00:18:32.327 "nvme_admin": false, 00:18:32.327 "nvme_io": false, 00:18:32.327 "nvme_io_md": false, 00:18:32.327 "write_zeroes": true, 00:18:32.327 "zcopy": false, 00:18:32.327 "get_zone_info": false, 00:18:32.327 "zone_management": false, 00:18:32.327 "zone_append": false, 00:18:32.327 "compare": false, 00:18:32.327 "compare_and_write": false, 00:18:32.327 "abort": false, 00:18:32.327 "seek_hole": false, 00:18:32.327 "seek_data": false, 00:18:32.327 "copy": false, 00:18:32.327 "nvme_iov_md": false 00:18:32.327 }, 00:18:32.327 "memory_domains": [ 00:18:32.327 { 00:18:32.327 "dma_device_id": "system", 00:18:32.327 "dma_device_type": 1 00:18:32.327 }, 00:18:32.327 { 00:18:32.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.327 "dma_device_type": 2 00:18:32.327 }, 00:18:32.327 { 00:18:32.327 "dma_device_id": "system", 00:18:32.327 "dma_device_type": 1 00:18:32.327 }, 00:18:32.327 { 00:18:32.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.327 "dma_device_type": 2 00:18:32.327 }, 00:18:32.327 { 00:18:32.327 "dma_device_id": "system", 00:18:32.327 "dma_device_type": 1 00:18:32.327 }, 00:18:32.327 { 00:18:32.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.327 "dma_device_type": 2 00:18:32.327 }, 00:18:32.327 { 00:18:32.327 "dma_device_id": "system", 00:18:32.327 "dma_device_type": 1 00:18:32.327 }, 00:18:32.327 { 00:18:32.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.327 "dma_device_type": 2 00:18:32.327 } 00:18:32.327 ], 00:18:32.327 "driver_specific": { 00:18:32.327 "raid": { 00:18:32.327 "uuid": "c4229ed3-021f-4264-9c61-b88d713e0163", 00:18:32.327 "strip_size_kb": 64, 00:18:32.327 "state": "online", 00:18:32.327 "raid_level": "raid0", 00:18:32.327 "superblock": true, 00:18:32.327 "num_base_bdevs": 4, 00:18:32.327 "num_base_bdevs_discovered": 4, 00:18:32.327 "num_base_bdevs_operational": 4, 00:18:32.327 "base_bdevs_list": [ 00:18:32.327 { 00:18:32.327 "name": "BaseBdev1", 00:18:32.327 
"uuid": "c251d64f-a02c-4066-bdbd-70a95225f136", 00:18:32.327 "is_configured": true, 00:18:32.327 "data_offset": 2048, 00:18:32.327 "data_size": 63488 00:18:32.327 }, 00:18:32.327 { 00:18:32.327 "name": "BaseBdev2", 00:18:32.327 "uuid": "86671356-f1a1-438d-b8b3-9c105964628e", 00:18:32.327 "is_configured": true, 00:18:32.327 "data_offset": 2048, 00:18:32.327 "data_size": 63488 00:18:32.327 }, 00:18:32.327 { 00:18:32.327 "name": "BaseBdev3", 00:18:32.327 "uuid": "fdc9f5cd-669b-49e8-b22c-1f6d54cb1f39", 00:18:32.327 "is_configured": true, 00:18:32.327 "data_offset": 2048, 00:18:32.327 "data_size": 63488 00:18:32.327 }, 00:18:32.327 { 00:18:32.327 "name": "BaseBdev4", 00:18:32.327 "uuid": "425d4648-bdc4-4e99-912d-b2062a3632c0", 00:18:32.327 "is_configured": true, 00:18:32.327 "data_offset": 2048, 00:18:32.327 "data_size": 63488 00:18:32.327 } 00:18:32.327 ] 00:18:32.327 } 00:18:32.327 } 00:18:32.327 }' 00:18:32.327 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:32.327 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:32.327 BaseBdev2 00:18:32.327 BaseBdev3 00:18:32.327 BaseBdev4' 00:18:32.327 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:32.327 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:32.327 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:32.586 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:32.586 "name": "BaseBdev1", 00:18:32.586 "aliases": [ 00:18:32.586 "c251d64f-a02c-4066-bdbd-70a95225f136" 00:18:32.586 ], 00:18:32.586 "product_name": "Malloc disk", 00:18:32.586 
"block_size": 512, 00:18:32.586 "num_blocks": 65536, 00:18:32.586 "uuid": "c251d64f-a02c-4066-bdbd-70a95225f136", 00:18:32.586 "assigned_rate_limits": { 00:18:32.586 "rw_ios_per_sec": 0, 00:18:32.586 "rw_mbytes_per_sec": 0, 00:18:32.586 "r_mbytes_per_sec": 0, 00:18:32.586 "w_mbytes_per_sec": 0 00:18:32.586 }, 00:18:32.586 "claimed": true, 00:18:32.586 "claim_type": "exclusive_write", 00:18:32.586 "zoned": false, 00:18:32.586 "supported_io_types": { 00:18:32.586 "read": true, 00:18:32.586 "write": true, 00:18:32.586 "unmap": true, 00:18:32.586 "flush": true, 00:18:32.586 "reset": true, 00:18:32.586 "nvme_admin": false, 00:18:32.586 "nvme_io": false, 00:18:32.586 "nvme_io_md": false, 00:18:32.586 "write_zeroes": true, 00:18:32.586 "zcopy": true, 00:18:32.586 "get_zone_info": false, 00:18:32.586 "zone_management": false, 00:18:32.586 "zone_append": false, 00:18:32.586 "compare": false, 00:18:32.586 "compare_and_write": false, 00:18:32.586 "abort": true, 00:18:32.586 "seek_hole": false, 00:18:32.586 "seek_data": false, 00:18:32.586 "copy": true, 00:18:32.586 "nvme_iov_md": false 00:18:32.586 }, 00:18:32.586 "memory_domains": [ 00:18:32.586 { 00:18:32.586 "dma_device_id": "system", 00:18:32.586 "dma_device_type": 1 00:18:32.586 }, 00:18:32.586 { 00:18:32.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.586 "dma_device_type": 2 00:18:32.586 } 00:18:32.586 ], 00:18:32.586 "driver_specific": {} 00:18:32.586 }' 00:18:32.586 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:32.586 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:32.586 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:32.586 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:32.586 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:32.586 13:18:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:32.586 13:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:32.586 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:32.586 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:32.586 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.845 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.845 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:32.845 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:32.845 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:32.845 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:33.103 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:33.103 "name": "BaseBdev2", 00:18:33.103 "aliases": [ 00:18:33.103 "86671356-f1a1-438d-b8b3-9c105964628e" 00:18:33.103 ], 00:18:33.103 "product_name": "Malloc disk", 00:18:33.103 "block_size": 512, 00:18:33.103 "num_blocks": 65536, 00:18:33.103 "uuid": "86671356-f1a1-438d-b8b3-9c105964628e", 00:18:33.103 "assigned_rate_limits": { 00:18:33.103 "rw_ios_per_sec": 0, 00:18:33.103 "rw_mbytes_per_sec": 0, 00:18:33.103 "r_mbytes_per_sec": 0, 00:18:33.103 "w_mbytes_per_sec": 0 00:18:33.103 }, 00:18:33.103 "claimed": true, 00:18:33.103 "claim_type": "exclusive_write", 00:18:33.103 "zoned": false, 00:18:33.103 "supported_io_types": { 00:18:33.103 "read": true, 00:18:33.103 "write": true, 00:18:33.103 "unmap": true, 00:18:33.103 
"flush": true, 00:18:33.103 "reset": true, 00:18:33.103 "nvme_admin": false, 00:18:33.103 "nvme_io": false, 00:18:33.103 "nvme_io_md": false, 00:18:33.103 "write_zeroes": true, 00:18:33.103 "zcopy": true, 00:18:33.103 "get_zone_info": false, 00:18:33.103 "zone_management": false, 00:18:33.103 "zone_append": false, 00:18:33.103 "compare": false, 00:18:33.103 "compare_and_write": false, 00:18:33.103 "abort": true, 00:18:33.103 "seek_hole": false, 00:18:33.103 "seek_data": false, 00:18:33.103 "copy": true, 00:18:33.103 "nvme_iov_md": false 00:18:33.103 }, 00:18:33.103 "memory_domains": [ 00:18:33.103 { 00:18:33.103 "dma_device_id": "system", 00:18:33.103 "dma_device_type": 1 00:18:33.103 }, 00:18:33.103 { 00:18:33.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.103 "dma_device_type": 2 00:18:33.103 } 00:18:33.103 ], 00:18:33.103 "driver_specific": {} 00:18:33.103 }' 00:18:33.103 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:33.103 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:33.103 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:33.103 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:33.103 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:33.103 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:33.103 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:33.103 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:33.361 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:33.361 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:33.361 13:18:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:33.361 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:33.361 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:33.361 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:33.362 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:33.620 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:33.620 "name": "BaseBdev3", 00:18:33.620 "aliases": [ 00:18:33.620 "fdc9f5cd-669b-49e8-b22c-1f6d54cb1f39" 00:18:33.620 ], 00:18:33.620 "product_name": "Malloc disk", 00:18:33.620 "block_size": 512, 00:18:33.620 "num_blocks": 65536, 00:18:33.620 "uuid": "fdc9f5cd-669b-49e8-b22c-1f6d54cb1f39", 00:18:33.620 "assigned_rate_limits": { 00:18:33.620 "rw_ios_per_sec": 0, 00:18:33.620 "rw_mbytes_per_sec": 0, 00:18:33.620 "r_mbytes_per_sec": 0, 00:18:33.620 "w_mbytes_per_sec": 0 00:18:33.620 }, 00:18:33.620 "claimed": true, 00:18:33.620 "claim_type": "exclusive_write", 00:18:33.620 "zoned": false, 00:18:33.620 "supported_io_types": { 00:18:33.620 "read": true, 00:18:33.620 "write": true, 00:18:33.620 "unmap": true, 00:18:33.620 "flush": true, 00:18:33.620 "reset": true, 00:18:33.620 "nvme_admin": false, 00:18:33.620 "nvme_io": false, 00:18:33.620 "nvme_io_md": false, 00:18:33.620 "write_zeroes": true, 00:18:33.620 "zcopy": true, 00:18:33.620 "get_zone_info": false, 00:18:33.620 "zone_management": false, 00:18:33.620 "zone_append": false, 00:18:33.620 "compare": false, 00:18:33.620 "compare_and_write": false, 00:18:33.620 "abort": true, 00:18:33.620 "seek_hole": false, 00:18:33.620 "seek_data": false, 00:18:33.620 "copy": true, 00:18:33.620 "nvme_iov_md": 
false 00:18:33.620 }, 00:18:33.620 "memory_domains": [ 00:18:33.620 { 00:18:33.620 "dma_device_id": "system", 00:18:33.620 "dma_device_type": 1 00:18:33.620 }, 00:18:33.620 { 00:18:33.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.620 "dma_device_type": 2 00:18:33.620 } 00:18:33.620 ], 00:18:33.620 "driver_specific": {} 00:18:33.620 }' 00:18:33.620 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:33.620 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:33.620 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:33.620 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:33.620 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:33.620 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:33.620 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:33.620 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:33.881 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:33.881 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:33.881 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:33.881 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:33.881 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:33.881 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:33.881 13:18:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:34.140 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:34.140 "name": "BaseBdev4", 00:18:34.140 "aliases": [ 00:18:34.140 "425d4648-bdc4-4e99-912d-b2062a3632c0" 00:18:34.140 ], 00:18:34.140 "product_name": "Malloc disk", 00:18:34.140 "block_size": 512, 00:18:34.140 "num_blocks": 65536, 00:18:34.140 "uuid": "425d4648-bdc4-4e99-912d-b2062a3632c0", 00:18:34.140 "assigned_rate_limits": { 00:18:34.140 "rw_ios_per_sec": 0, 00:18:34.140 "rw_mbytes_per_sec": 0, 00:18:34.140 "r_mbytes_per_sec": 0, 00:18:34.140 "w_mbytes_per_sec": 0 00:18:34.140 }, 00:18:34.140 "claimed": true, 00:18:34.140 "claim_type": "exclusive_write", 00:18:34.140 "zoned": false, 00:18:34.140 "supported_io_types": { 00:18:34.140 "read": true, 00:18:34.140 "write": true, 00:18:34.140 "unmap": true, 00:18:34.140 "flush": true, 00:18:34.140 "reset": true, 00:18:34.140 "nvme_admin": false, 00:18:34.140 "nvme_io": false, 00:18:34.140 "nvme_io_md": false, 00:18:34.140 "write_zeroes": true, 00:18:34.140 "zcopy": true, 00:18:34.140 "get_zone_info": false, 00:18:34.140 "zone_management": false, 00:18:34.140 "zone_append": false, 00:18:34.140 "compare": false, 00:18:34.140 "compare_and_write": false, 00:18:34.140 "abort": true, 00:18:34.140 "seek_hole": false, 00:18:34.140 "seek_data": false, 00:18:34.140 "copy": true, 00:18:34.140 "nvme_iov_md": false 00:18:34.140 }, 00:18:34.140 "memory_domains": [ 00:18:34.140 { 00:18:34.140 "dma_device_id": "system", 00:18:34.140 "dma_device_type": 1 00:18:34.140 }, 00:18:34.140 { 00:18:34.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.140 "dma_device_type": 2 00:18:34.140 } 00:18:34.140 ], 00:18:34.140 "driver_specific": {} 00:18:34.140 }' 00:18:34.140 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:34.140 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:18:34.140 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:34.140 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:34.140 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:34.140 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:34.140 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:34.399 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:34.399 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:34.399 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:34.399 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:34.399 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:34.399 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:34.659 [2024-07-25 13:18:44.932872] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:34.659 [2024-07-25 13:18:44.932895] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.659 [2024-07-25 13:18:44.932936] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.659 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:34.659 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:18:34.659 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:34.659 13:18:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:18:34.659 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:34.659 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:34.659 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:34.659 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:34.659 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:34.659 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:34.659 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:34.659 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:34.659 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:34.659 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:34.659 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:34.659 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.659 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.918 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.918 "name": "Existed_Raid", 00:18:34.918 "uuid": "c4229ed3-021f-4264-9c61-b88d713e0163", 00:18:34.918 "strip_size_kb": 64, 00:18:34.918 "state": "offline", 00:18:34.918 
"raid_level": "raid0", 00:18:34.918 "superblock": true, 00:18:34.918 "num_base_bdevs": 4, 00:18:34.918 "num_base_bdevs_discovered": 3, 00:18:34.918 "num_base_bdevs_operational": 3, 00:18:34.918 "base_bdevs_list": [ 00:18:34.918 { 00:18:34.918 "name": null, 00:18:34.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.918 "is_configured": false, 00:18:34.918 "data_offset": 2048, 00:18:34.918 "data_size": 63488 00:18:34.918 }, 00:18:34.918 { 00:18:34.918 "name": "BaseBdev2", 00:18:34.918 "uuid": "86671356-f1a1-438d-b8b3-9c105964628e", 00:18:34.918 "is_configured": true, 00:18:34.918 "data_offset": 2048, 00:18:34.918 "data_size": 63488 00:18:34.918 }, 00:18:34.918 { 00:18:34.918 "name": "BaseBdev3", 00:18:34.918 "uuid": "fdc9f5cd-669b-49e8-b22c-1f6d54cb1f39", 00:18:34.918 "is_configured": true, 00:18:34.918 "data_offset": 2048, 00:18:34.918 "data_size": 63488 00:18:34.918 }, 00:18:34.918 { 00:18:34.918 "name": "BaseBdev4", 00:18:34.918 "uuid": "425d4648-bdc4-4e99-912d-b2062a3632c0", 00:18:34.918 "is_configured": true, 00:18:34.918 "data_offset": 2048, 00:18:34.918 "data_size": 63488 00:18:34.918 } 00:18:34.918 ] 00:18:34.918 }' 00:18:34.918 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.918 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.486 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:35.486 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:35.486 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.486 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:35.745 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:18:35.745 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:35.745 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:36.004 [2024-07-25 13:18:46.457842] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:36.264 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:36.264 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:36.264 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.264 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:36.264 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:36.264 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:36.264 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:36.523 [2024-07-25 13:18:46.852853] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:36.523 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:36.523 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:36.523 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.523 13:18:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:36.782 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:36.782 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:36.782 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:37.042 [2024-07-25 13:18:47.291876] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:37.042 [2024-07-25 13:18:47.291912] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1456840 name Existed_Raid, state offline 00:18:37.042 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:37.042 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:37.042 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.042 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:37.669 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:37.669 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:37.669 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:18:37.669 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:37.669 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:37.669 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:37.669 BaseBdev2 00:18:37.669 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:37.669 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:37.669 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:37.669 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:37.670 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:37.670 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:37.670 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:37.928 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:38.187 [ 00:18:38.187 { 00:18:38.187 "name": "BaseBdev2", 00:18:38.187 "aliases": [ 00:18:38.187 "b511884a-37b4-429c-bff8-c0dd37c61c5e" 00:18:38.187 ], 00:18:38.187 "product_name": "Malloc disk", 00:18:38.187 "block_size": 512, 00:18:38.187 "num_blocks": 65536, 00:18:38.187 "uuid": "b511884a-37b4-429c-bff8-c0dd37c61c5e", 00:18:38.187 "assigned_rate_limits": { 00:18:38.187 "rw_ios_per_sec": 0, 00:18:38.187 "rw_mbytes_per_sec": 0, 00:18:38.187 "r_mbytes_per_sec": 0, 00:18:38.187 "w_mbytes_per_sec": 0 00:18:38.187 }, 00:18:38.187 "claimed": false, 00:18:38.187 "zoned": false, 00:18:38.187 "supported_io_types": { 00:18:38.187 "read": true, 00:18:38.187 "write": true, 00:18:38.187 "unmap": true, 00:18:38.187 "flush": 
true, 00:18:38.187 "reset": true, 00:18:38.187 "nvme_admin": false, 00:18:38.187 "nvme_io": false, 00:18:38.187 "nvme_io_md": false, 00:18:38.188 "write_zeroes": true, 00:18:38.188 "zcopy": true, 00:18:38.188 "get_zone_info": false, 00:18:38.188 "zone_management": false, 00:18:38.188 "zone_append": false, 00:18:38.188 "compare": false, 00:18:38.188 "compare_and_write": false, 00:18:38.188 "abort": true, 00:18:38.188 "seek_hole": false, 00:18:38.188 "seek_data": false, 00:18:38.188 "copy": true, 00:18:38.188 "nvme_iov_md": false 00:18:38.188 }, 00:18:38.188 "memory_domains": [ 00:18:38.188 { 00:18:38.188 "dma_device_id": "system", 00:18:38.188 "dma_device_type": 1 00:18:38.188 }, 00:18:38.188 { 00:18:38.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.188 "dma_device_type": 2 00:18:38.188 } 00:18:38.188 ], 00:18:38.188 "driver_specific": {} 00:18:38.188 } 00:18:38.188 ] 00:18:38.188 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:38.188 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:38.188 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:38.188 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:38.447 BaseBdev3 00:18:38.447 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:38.447 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:38.447 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:38.447 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:38.447 13:18:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:38.447 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:38.447 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:38.447 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:38.707 [ 00:18:38.707 { 00:18:38.707 "name": "BaseBdev3", 00:18:38.707 "aliases": [ 00:18:38.707 "c5c6c9ee-15d0-4de0-818a-a8ed7a0ca489" 00:18:38.707 ], 00:18:38.707 "product_name": "Malloc disk", 00:18:38.707 "block_size": 512, 00:18:38.707 "num_blocks": 65536, 00:18:38.707 "uuid": "c5c6c9ee-15d0-4de0-818a-a8ed7a0ca489", 00:18:38.707 "assigned_rate_limits": { 00:18:38.707 "rw_ios_per_sec": 0, 00:18:38.707 "rw_mbytes_per_sec": 0, 00:18:38.707 "r_mbytes_per_sec": 0, 00:18:38.707 "w_mbytes_per_sec": 0 00:18:38.707 }, 00:18:38.707 "claimed": false, 00:18:38.707 "zoned": false, 00:18:38.707 "supported_io_types": { 00:18:38.707 "read": true, 00:18:38.707 "write": true, 00:18:38.707 "unmap": true, 00:18:38.707 "flush": true, 00:18:38.707 "reset": true, 00:18:38.707 "nvme_admin": false, 00:18:38.707 "nvme_io": false, 00:18:38.707 "nvme_io_md": false, 00:18:38.707 "write_zeroes": true, 00:18:38.707 "zcopy": true, 00:18:38.707 "get_zone_info": false, 00:18:38.707 "zone_management": false, 00:18:38.707 "zone_append": false, 00:18:38.707 "compare": false, 00:18:38.707 "compare_and_write": false, 00:18:38.707 "abort": true, 00:18:38.707 "seek_hole": false, 00:18:38.707 "seek_data": false, 00:18:38.707 "copy": true, 00:18:38.707 "nvme_iov_md": false 00:18:38.707 }, 00:18:38.707 "memory_domains": [ 00:18:38.707 { 00:18:38.707 "dma_device_id": "system", 00:18:38.707 "dma_device_type": 1 
00:18:38.707 }, 00:18:38.707 { 00:18:38.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.707 "dma_device_type": 2 00:18:38.707 } 00:18:38.707 ], 00:18:38.707 "driver_specific": {} 00:18:38.707 } 00:18:38.707 ] 00:18:38.707 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:38.707 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:38.707 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:38.707 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:38.966 BaseBdev4 00:18:38.966 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:18:38.966 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:38.966 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:38.966 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:38.966 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:38.966 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:38.966 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:39.225 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:39.485 [ 00:18:39.485 { 00:18:39.485 "name": "BaseBdev4", 00:18:39.485 "aliases": [ 
00:18:39.485 "f5d5a3d1-93dd-4761-bd1a-880a45336f18" 00:18:39.485 ], 00:18:39.485 "product_name": "Malloc disk", 00:18:39.485 "block_size": 512, 00:18:39.485 "num_blocks": 65536, 00:18:39.485 "uuid": "f5d5a3d1-93dd-4761-bd1a-880a45336f18", 00:18:39.485 "assigned_rate_limits": { 00:18:39.485 "rw_ios_per_sec": 0, 00:18:39.485 "rw_mbytes_per_sec": 0, 00:18:39.485 "r_mbytes_per_sec": 0, 00:18:39.485 "w_mbytes_per_sec": 0 00:18:39.485 }, 00:18:39.485 "claimed": false, 00:18:39.485 "zoned": false, 00:18:39.485 "supported_io_types": { 00:18:39.485 "read": true, 00:18:39.485 "write": true, 00:18:39.485 "unmap": true, 00:18:39.485 "flush": true, 00:18:39.485 "reset": true, 00:18:39.485 "nvme_admin": false, 00:18:39.485 "nvme_io": false, 00:18:39.485 "nvme_io_md": false, 00:18:39.485 "write_zeroes": true, 00:18:39.485 "zcopy": true, 00:18:39.485 "get_zone_info": false, 00:18:39.485 "zone_management": false, 00:18:39.485 "zone_append": false, 00:18:39.485 "compare": false, 00:18:39.485 "compare_and_write": false, 00:18:39.485 "abort": true, 00:18:39.485 "seek_hole": false, 00:18:39.485 "seek_data": false, 00:18:39.485 "copy": true, 00:18:39.485 "nvme_iov_md": false 00:18:39.485 }, 00:18:39.485 "memory_domains": [ 00:18:39.485 { 00:18:39.485 "dma_device_id": "system", 00:18:39.485 "dma_device_type": 1 00:18:39.485 }, 00:18:39.485 { 00:18:39.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.485 "dma_device_type": 2 00:18:39.485 } 00:18:39.485 ], 00:18:39.485 "driver_specific": {} 00:18:39.485 } 00:18:39.485 ] 00:18:39.485 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:39.485 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:39.485 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:39.485 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:39.744 [2024-07-25 13:18:49.977327] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:39.744 [2024-07-25 13:18:49.977362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:39.744 [2024-07-25 13:18:49.977379] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:39.744 [2024-07-25 13:18:49.978599] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:39.744 [2024-07-25 13:18:49.978636] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:39.744 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:39.744 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:39.744 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:39.744 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:39.744 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:39.744 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:39.744 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:39.744 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:39.744 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:39.744 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- 
# local tmp 00:18:39.744 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.744 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.312 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:40.312 "name": "Existed_Raid", 00:18:40.312 "uuid": "f0962625-8747-44e7-8da1-e148c92b5382", 00:18:40.312 "strip_size_kb": 64, 00:18:40.312 "state": "configuring", 00:18:40.312 "raid_level": "raid0", 00:18:40.312 "superblock": true, 00:18:40.312 "num_base_bdevs": 4, 00:18:40.312 "num_base_bdevs_discovered": 3, 00:18:40.312 "num_base_bdevs_operational": 4, 00:18:40.312 "base_bdevs_list": [ 00:18:40.312 { 00:18:40.312 "name": "BaseBdev1", 00:18:40.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.312 "is_configured": false, 00:18:40.312 "data_offset": 0, 00:18:40.312 "data_size": 0 00:18:40.312 }, 00:18:40.312 { 00:18:40.312 "name": "BaseBdev2", 00:18:40.312 "uuid": "b511884a-37b4-429c-bff8-c0dd37c61c5e", 00:18:40.312 "is_configured": true, 00:18:40.312 "data_offset": 2048, 00:18:40.312 "data_size": 63488 00:18:40.312 }, 00:18:40.312 { 00:18:40.312 "name": "BaseBdev3", 00:18:40.312 "uuid": "c5c6c9ee-15d0-4de0-818a-a8ed7a0ca489", 00:18:40.312 "is_configured": true, 00:18:40.312 "data_offset": 2048, 00:18:40.312 "data_size": 63488 00:18:40.312 }, 00:18:40.312 { 00:18:40.312 "name": "BaseBdev4", 00:18:40.312 "uuid": "f5d5a3d1-93dd-4761-bd1a-880a45336f18", 00:18:40.312 "is_configured": true, 00:18:40.312 "data_offset": 2048, 00:18:40.312 "data_size": 63488 00:18:40.312 } 00:18:40.312 ] 00:18:40.312 }' 00:18:40.312 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:40.312 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:40.878 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:40.878 [2024-07-25 13:18:51.304819] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:40.878 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:40.878 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:40.878 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:40.878 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:40.878 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:40.878 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:40.878 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.878 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.878 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.878 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.878 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.878 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.138 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.138 "name": "Existed_Raid", 
00:18:41.138 "uuid": "f0962625-8747-44e7-8da1-e148c92b5382", 00:18:41.138 "strip_size_kb": 64, 00:18:41.138 "state": "configuring", 00:18:41.138 "raid_level": "raid0", 00:18:41.138 "superblock": true, 00:18:41.138 "num_base_bdevs": 4, 00:18:41.138 "num_base_bdevs_discovered": 2, 00:18:41.138 "num_base_bdevs_operational": 4, 00:18:41.138 "base_bdevs_list": [ 00:18:41.138 { 00:18:41.138 "name": "BaseBdev1", 00:18:41.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.138 "is_configured": false, 00:18:41.138 "data_offset": 0, 00:18:41.138 "data_size": 0 00:18:41.138 }, 00:18:41.138 { 00:18:41.138 "name": null, 00:18:41.138 "uuid": "b511884a-37b4-429c-bff8-c0dd37c61c5e", 00:18:41.138 "is_configured": false, 00:18:41.138 "data_offset": 2048, 00:18:41.138 "data_size": 63488 00:18:41.138 }, 00:18:41.138 { 00:18:41.138 "name": "BaseBdev3", 00:18:41.138 "uuid": "c5c6c9ee-15d0-4de0-818a-a8ed7a0ca489", 00:18:41.138 "is_configured": true, 00:18:41.138 "data_offset": 2048, 00:18:41.138 "data_size": 63488 00:18:41.138 }, 00:18:41.138 { 00:18:41.138 "name": "BaseBdev4", 00:18:41.138 "uuid": "f5d5a3d1-93dd-4761-bd1a-880a45336f18", 00:18:41.138 "is_configured": true, 00:18:41.138 "data_offset": 2048, 00:18:41.138 "data_size": 63488 00:18:41.138 } 00:18:41.138 ] 00:18:41.138 }' 00:18:41.138 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.138 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.706 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.706 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:41.965 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:41.965 13:18:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:42.225 [2024-07-25 13:18:52.535282] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:42.225 BaseBdev1 00:18:42.225 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:42.225 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:42.225 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:42.225 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:42.225 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:42.225 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:42.225 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:42.484 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:42.484 [ 00:18:42.484 { 00:18:42.484 "name": "BaseBdev1", 00:18:42.484 "aliases": [ 00:18:42.484 "f3b878ca-293d-4b98-b93a-25b810e0baef" 00:18:42.484 ], 00:18:42.484 "product_name": "Malloc disk", 00:18:42.484 "block_size": 512, 00:18:42.484 "num_blocks": 65536, 00:18:42.484 "uuid": "f3b878ca-293d-4b98-b93a-25b810e0baef", 00:18:42.484 "assigned_rate_limits": { 00:18:42.484 "rw_ios_per_sec": 0, 00:18:42.484 "rw_mbytes_per_sec": 0, 00:18:42.484 "r_mbytes_per_sec": 0, 00:18:42.484 "w_mbytes_per_sec": 0 00:18:42.484 }, 
00:18:42.484 "claimed": true, 00:18:42.484 "claim_type": "exclusive_write", 00:18:42.484 "zoned": false, 00:18:42.484 "supported_io_types": { 00:18:42.484 "read": true, 00:18:42.484 "write": true, 00:18:42.484 "unmap": true, 00:18:42.484 "flush": true, 00:18:42.484 "reset": true, 00:18:42.484 "nvme_admin": false, 00:18:42.484 "nvme_io": false, 00:18:42.484 "nvme_io_md": false, 00:18:42.484 "write_zeroes": true, 00:18:42.484 "zcopy": true, 00:18:42.484 "get_zone_info": false, 00:18:42.484 "zone_management": false, 00:18:42.484 "zone_append": false, 00:18:42.484 "compare": false, 00:18:42.484 "compare_and_write": false, 00:18:42.484 "abort": true, 00:18:42.484 "seek_hole": false, 00:18:42.484 "seek_data": false, 00:18:42.484 "copy": true, 00:18:42.484 "nvme_iov_md": false 00:18:42.484 }, 00:18:42.484 "memory_domains": [ 00:18:42.484 { 00:18:42.484 "dma_device_id": "system", 00:18:42.484 "dma_device_type": 1 00:18:42.484 }, 00:18:42.484 { 00:18:42.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.484 "dma_device_type": 2 00:18:42.484 } 00:18:42.484 ], 00:18:42.484 "driver_specific": {} 00:18:42.484 } 00:18:42.484 ] 00:18:42.743 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:42.743 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:42.743 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:42.743 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:42.743 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:42.743 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:42.743 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:42.743 
13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:42.743 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:42.743 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:42.743 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:42.743 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.743 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.743 13:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:42.743 "name": "Existed_Raid", 00:18:42.743 "uuid": "f0962625-8747-44e7-8da1-e148c92b5382", 00:18:42.743 "strip_size_kb": 64, 00:18:42.743 "state": "configuring", 00:18:42.743 "raid_level": "raid0", 00:18:42.743 "superblock": true, 00:18:42.743 "num_base_bdevs": 4, 00:18:42.743 "num_base_bdevs_discovered": 3, 00:18:42.743 "num_base_bdevs_operational": 4, 00:18:42.743 "base_bdevs_list": [ 00:18:42.743 { 00:18:42.743 "name": "BaseBdev1", 00:18:42.743 "uuid": "f3b878ca-293d-4b98-b93a-25b810e0baef", 00:18:42.743 "is_configured": true, 00:18:42.743 "data_offset": 2048, 00:18:42.743 "data_size": 63488 00:18:42.743 }, 00:18:42.743 { 00:18:42.743 "name": null, 00:18:42.743 "uuid": "b511884a-37b4-429c-bff8-c0dd37c61c5e", 00:18:42.743 "is_configured": false, 00:18:42.743 "data_offset": 2048, 00:18:42.743 "data_size": 63488 00:18:42.743 }, 00:18:42.743 { 00:18:42.743 "name": "BaseBdev3", 00:18:42.743 "uuid": "c5c6c9ee-15d0-4de0-818a-a8ed7a0ca489", 00:18:42.743 "is_configured": true, 00:18:42.743 "data_offset": 2048, 00:18:42.743 "data_size": 63488 00:18:42.743 }, 00:18:42.743 { 00:18:42.743 "name": 
"BaseBdev4", 00:18:42.743 "uuid": "f5d5a3d1-93dd-4761-bd1a-880a45336f18", 00:18:42.743 "is_configured": true, 00:18:42.743 "data_offset": 2048, 00:18:42.743 "data_size": 63488 00:18:42.743 } 00:18:42.743 ] 00:18:42.743 }' 00:18:42.743 13:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:42.743 13:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.310 13:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.310 13:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:43.878 13:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:43.878 13:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:44.137 [2024-07-25 13:18:54.428331] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:44.137 13:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:44.137 13:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:44.137 13:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:44.137 13:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:44.137 13:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:44.137 13:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:44.137 13:18:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:44.137 13:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:44.137 13:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:44.137 13:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:44.137 13:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.137 13:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.137 13:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:44.137 "name": "Existed_Raid", 00:18:44.137 "uuid": "f0962625-8747-44e7-8da1-e148c92b5382", 00:18:44.137 "strip_size_kb": 64, 00:18:44.137 "state": "configuring", 00:18:44.137 "raid_level": "raid0", 00:18:44.137 "superblock": true, 00:18:44.137 "num_base_bdevs": 4, 00:18:44.137 "num_base_bdevs_discovered": 2, 00:18:44.137 "num_base_bdevs_operational": 4, 00:18:44.137 "base_bdevs_list": [ 00:18:44.137 { 00:18:44.137 "name": "BaseBdev1", 00:18:44.137 "uuid": "f3b878ca-293d-4b98-b93a-25b810e0baef", 00:18:44.137 "is_configured": true, 00:18:44.137 "data_offset": 2048, 00:18:44.137 "data_size": 63488 00:18:44.137 }, 00:18:44.137 { 00:18:44.137 "name": null, 00:18:44.137 "uuid": "b511884a-37b4-429c-bff8-c0dd37c61c5e", 00:18:44.137 "is_configured": false, 00:18:44.137 "data_offset": 2048, 00:18:44.137 "data_size": 63488 00:18:44.137 }, 00:18:44.137 { 00:18:44.137 "name": null, 00:18:44.137 "uuid": "c5c6c9ee-15d0-4de0-818a-a8ed7a0ca489", 00:18:44.137 "is_configured": false, 00:18:44.137 "data_offset": 2048, 00:18:44.137 "data_size": 63488 00:18:44.137 }, 00:18:44.137 { 00:18:44.137 "name": "BaseBdev4", 
00:18:44.137 "uuid": "f5d5a3d1-93dd-4761-bd1a-880a45336f18", 00:18:44.137 "is_configured": true, 00:18:44.137 "data_offset": 2048, 00:18:44.137 "data_size": 63488 00:18:44.137 } 00:18:44.137 ] 00:18:44.137 }' 00:18:44.137 13:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:44.137 13:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.074 13:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.074 13:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:45.074 13:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:45.074 13:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:45.333 [2024-07-25 13:18:55.631531] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:45.333 13:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:45.333 13:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:45.333 13:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:45.333 13:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:45.333 13:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:45.333 13:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:45.333 13:18:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:45.333 13:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:45.333 13:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:45.333 13:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:45.333 13:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.333 13:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.900 13:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:45.901 "name": "Existed_Raid", 00:18:45.901 "uuid": "f0962625-8747-44e7-8da1-e148c92b5382", 00:18:45.901 "strip_size_kb": 64, 00:18:45.901 "state": "configuring", 00:18:45.901 "raid_level": "raid0", 00:18:45.901 "superblock": true, 00:18:45.901 "num_base_bdevs": 4, 00:18:45.901 "num_base_bdevs_discovered": 3, 00:18:45.901 "num_base_bdevs_operational": 4, 00:18:45.901 "base_bdevs_list": [ 00:18:45.901 { 00:18:45.901 "name": "BaseBdev1", 00:18:45.901 "uuid": "f3b878ca-293d-4b98-b93a-25b810e0baef", 00:18:45.901 "is_configured": true, 00:18:45.901 "data_offset": 2048, 00:18:45.901 "data_size": 63488 00:18:45.901 }, 00:18:45.901 { 00:18:45.901 "name": null, 00:18:45.901 "uuid": "b511884a-37b4-429c-bff8-c0dd37c61c5e", 00:18:45.901 "is_configured": false, 00:18:45.901 "data_offset": 2048, 00:18:45.901 "data_size": 63488 00:18:45.901 }, 00:18:45.901 { 00:18:45.901 "name": "BaseBdev3", 00:18:45.901 "uuid": "c5c6c9ee-15d0-4de0-818a-a8ed7a0ca489", 00:18:45.901 "is_configured": true, 00:18:45.901 "data_offset": 2048, 00:18:45.901 "data_size": 63488 00:18:45.901 }, 00:18:45.901 { 00:18:45.901 "name": "BaseBdev4", 
00:18:45.901 "uuid": "f5d5a3d1-93dd-4761-bd1a-880a45336f18", 00:18:45.901 "is_configured": true, 00:18:45.901 "data_offset": 2048, 00:18:45.901 "data_size": 63488 00:18:45.901 } 00:18:45.901 ] 00:18:45.901 }' 00:18:45.901 13:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:45.901 13:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.468 13:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.468 13:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:46.468 13:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:46.468 13:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:46.727 [2024-07-25 13:18:57.103438] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:46.728 13:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:46.728 13:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:46.728 13:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:46.728 13:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:46.728 13:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:46.728 13:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:46.728 13:18:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:46.728 13:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:46.728 13:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:46.728 13:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:46.728 13:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.728 13:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.987 13:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:46.987 "name": "Existed_Raid", 00:18:46.987 "uuid": "f0962625-8747-44e7-8da1-e148c92b5382", 00:18:46.987 "strip_size_kb": 64, 00:18:46.987 "state": "configuring", 00:18:46.987 "raid_level": "raid0", 00:18:46.987 "superblock": true, 00:18:46.987 "num_base_bdevs": 4, 00:18:46.987 "num_base_bdevs_discovered": 2, 00:18:46.987 "num_base_bdevs_operational": 4, 00:18:46.987 "base_bdevs_list": [ 00:18:46.987 { 00:18:46.987 "name": null, 00:18:46.987 "uuid": "f3b878ca-293d-4b98-b93a-25b810e0baef", 00:18:46.987 "is_configured": false, 00:18:46.987 "data_offset": 2048, 00:18:46.987 "data_size": 63488 00:18:46.987 }, 00:18:46.987 { 00:18:46.987 "name": null, 00:18:46.987 "uuid": "b511884a-37b4-429c-bff8-c0dd37c61c5e", 00:18:46.987 "is_configured": false, 00:18:46.987 "data_offset": 2048, 00:18:46.987 "data_size": 63488 00:18:46.987 }, 00:18:46.987 { 00:18:46.987 "name": "BaseBdev3", 00:18:46.987 "uuid": "c5c6c9ee-15d0-4de0-818a-a8ed7a0ca489", 00:18:46.987 "is_configured": true, 00:18:46.987 "data_offset": 2048, 00:18:46.987 "data_size": 63488 00:18:46.987 }, 00:18:46.987 { 00:18:46.987 "name": "BaseBdev4", 00:18:46.987 "uuid": 
"f5d5a3d1-93dd-4761-bd1a-880a45336f18", 00:18:46.987 "is_configured": true, 00:18:46.987 "data_offset": 2048, 00:18:46.987 "data_size": 63488 00:18:46.987 } 00:18:46.987 ] 00:18:46.987 }' 00:18:46.987 13:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:46.987 13:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.555 13:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.555 13:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:47.814 13:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:47.814 13:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:48.073 [2024-07-25 13:18:58.316291] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:48.073 13:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:48.073 13:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:48.073 13:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:48.073 13:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:48.073 13:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:48.073 13:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:48.073 13:18:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:48.073 13:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:48.073 13:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:48.073 13:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:48.073 13:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.073 13:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.333 13:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:48.333 "name": "Existed_Raid", 00:18:48.333 "uuid": "f0962625-8747-44e7-8da1-e148c92b5382", 00:18:48.333 "strip_size_kb": 64, 00:18:48.333 "state": "configuring", 00:18:48.333 "raid_level": "raid0", 00:18:48.333 "superblock": true, 00:18:48.333 "num_base_bdevs": 4, 00:18:48.333 "num_base_bdevs_discovered": 3, 00:18:48.333 "num_base_bdevs_operational": 4, 00:18:48.333 "base_bdevs_list": [ 00:18:48.333 { 00:18:48.333 "name": null, 00:18:48.333 "uuid": "f3b878ca-293d-4b98-b93a-25b810e0baef", 00:18:48.333 "is_configured": false, 00:18:48.333 "data_offset": 2048, 00:18:48.333 "data_size": 63488 00:18:48.333 }, 00:18:48.333 { 00:18:48.333 "name": "BaseBdev2", 00:18:48.333 "uuid": "b511884a-37b4-429c-bff8-c0dd37c61c5e", 00:18:48.333 "is_configured": true, 00:18:48.333 "data_offset": 2048, 00:18:48.333 "data_size": 63488 00:18:48.333 }, 00:18:48.333 { 00:18:48.333 "name": "BaseBdev3", 00:18:48.333 "uuid": "c5c6c9ee-15d0-4de0-818a-a8ed7a0ca489", 00:18:48.333 "is_configured": true, 00:18:48.333 "data_offset": 2048, 00:18:48.333 "data_size": 63488 00:18:48.333 }, 00:18:48.333 { 00:18:48.333 "name": "BaseBdev4", 
00:18:48.333 "uuid": "f5d5a3d1-93dd-4761-bd1a-880a45336f18", 00:18:48.333 "is_configured": true, 00:18:48.333 "data_offset": 2048, 00:18:48.333 "data_size": 63488 00:18:48.333 } 00:18:48.333 ] 00:18:48.333 }' 00:18:48.333 13:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:48.333 13:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.901 13:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.901 13:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:48.901 13:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:48.901 13:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:48.901 13:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.160 13:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f3b878ca-293d-4b98-b93a-25b810e0baef 00:18:49.419 [2024-07-25 13:18:59.783262] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:49.419 [2024-07-25 13:18:59.783400] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1455360 00:18:49.419 [2024-07-25 13:18:59.783412] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:49.419 [2024-07-25 13:18:59.783570] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15fa070 00:18:49.419 [2024-07-25 13:18:59.783675] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1455360 00:18:49.419 [2024-07-25 13:18:59.783684] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1455360 00:18:49.419 [2024-07-25 13:18:59.783764] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.419 NewBaseBdev 00:18:49.419 13:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:49.419 13:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:49.419 13:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:49.419 13:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:49.419 13:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:49.419 13:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:49.419 13:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:49.678 13:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:49.937 [ 00:18:49.937 { 00:18:49.937 "name": "NewBaseBdev", 00:18:49.937 "aliases": [ 00:18:49.937 "f3b878ca-293d-4b98-b93a-25b810e0baef" 00:18:49.937 ], 00:18:49.937 "product_name": "Malloc disk", 00:18:49.937 "block_size": 512, 00:18:49.937 "num_blocks": 65536, 00:18:49.937 "uuid": "f3b878ca-293d-4b98-b93a-25b810e0baef", 00:18:49.937 "assigned_rate_limits": { 00:18:49.937 "rw_ios_per_sec": 0, 00:18:49.937 "rw_mbytes_per_sec": 0, 00:18:49.937 "r_mbytes_per_sec": 0, 00:18:49.937 
"w_mbytes_per_sec": 0 00:18:49.938 }, 00:18:49.938 "claimed": true, 00:18:49.938 "claim_type": "exclusive_write", 00:18:49.938 "zoned": false, 00:18:49.938 "supported_io_types": { 00:18:49.938 "read": true, 00:18:49.938 "write": true, 00:18:49.938 "unmap": true, 00:18:49.938 "flush": true, 00:18:49.938 "reset": true, 00:18:49.938 "nvme_admin": false, 00:18:49.938 "nvme_io": false, 00:18:49.938 "nvme_io_md": false, 00:18:49.938 "write_zeroes": true, 00:18:49.938 "zcopy": true, 00:18:49.938 "get_zone_info": false, 00:18:49.938 "zone_management": false, 00:18:49.938 "zone_append": false, 00:18:49.938 "compare": false, 00:18:49.938 "compare_and_write": false, 00:18:49.938 "abort": true, 00:18:49.938 "seek_hole": false, 00:18:49.938 "seek_data": false, 00:18:49.938 "copy": true, 00:18:49.938 "nvme_iov_md": false 00:18:49.938 }, 00:18:49.938 "memory_domains": [ 00:18:49.938 { 00:18:49.938 "dma_device_id": "system", 00:18:49.938 "dma_device_type": 1 00:18:49.938 }, 00:18:49.938 { 00:18:49.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.938 "dma_device_type": 2 00:18:49.938 } 00:18:49.938 ], 00:18:49.938 "driver_specific": {} 00:18:49.938 } 00:18:49.938 ] 00:18:49.938 13:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:49.938 13:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:49.938 13:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:49.938 13:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:49.938 13:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:49.938 13:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:49.938 13:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:18:49.938 13:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:49.938 13:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:49.938 13:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:49.938 13:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:49.938 13:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.938 13:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.197 13:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:50.197 "name": "Existed_Raid", 00:18:50.197 "uuid": "f0962625-8747-44e7-8da1-e148c92b5382", 00:18:50.197 "strip_size_kb": 64, 00:18:50.197 "state": "online", 00:18:50.197 "raid_level": "raid0", 00:18:50.197 "superblock": true, 00:18:50.197 "num_base_bdevs": 4, 00:18:50.197 "num_base_bdevs_discovered": 4, 00:18:50.197 "num_base_bdevs_operational": 4, 00:18:50.197 "base_bdevs_list": [ 00:18:50.197 { 00:18:50.197 "name": "NewBaseBdev", 00:18:50.197 "uuid": "f3b878ca-293d-4b98-b93a-25b810e0baef", 00:18:50.197 "is_configured": true, 00:18:50.197 "data_offset": 2048, 00:18:50.197 "data_size": 63488 00:18:50.197 }, 00:18:50.197 { 00:18:50.197 "name": "BaseBdev2", 00:18:50.197 "uuid": "b511884a-37b4-429c-bff8-c0dd37c61c5e", 00:18:50.197 "is_configured": true, 00:18:50.197 "data_offset": 2048, 00:18:50.197 "data_size": 63488 00:18:50.197 }, 00:18:50.197 { 00:18:50.197 "name": "BaseBdev3", 00:18:50.197 "uuid": "c5c6c9ee-15d0-4de0-818a-a8ed7a0ca489", 00:18:50.197 "is_configured": true, 00:18:50.197 "data_offset": 2048, 00:18:50.197 "data_size": 63488 00:18:50.197 }, 
00:18:50.197 { 00:18:50.197 "name": "BaseBdev4", 00:18:50.197 "uuid": "f5d5a3d1-93dd-4761-bd1a-880a45336f18", 00:18:50.197 "is_configured": true, 00:18:50.197 "data_offset": 2048, 00:18:50.197 "data_size": 63488 00:18:50.197 } 00:18:50.197 ] 00:18:50.197 }' 00:18:50.197 13:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:50.197 13:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.767 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:50.767 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:50.767 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:50.767 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:50.767 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:50.767 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:50.767 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:50.767 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:51.095 [2024-07-25 13:19:01.283531] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:51.095 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:51.095 "name": "Existed_Raid", 00:18:51.095 "aliases": [ 00:18:51.096 "f0962625-8747-44e7-8da1-e148c92b5382" 00:18:51.096 ], 00:18:51.096 "product_name": "Raid Volume", 00:18:51.096 "block_size": 512, 00:18:51.096 "num_blocks": 253952, 00:18:51.096 "uuid": "f0962625-8747-44e7-8da1-e148c92b5382", 
00:18:51.096 "assigned_rate_limits": { 00:18:51.096 "rw_ios_per_sec": 0, 00:18:51.096 "rw_mbytes_per_sec": 0, 00:18:51.096 "r_mbytes_per_sec": 0, 00:18:51.096 "w_mbytes_per_sec": 0 00:18:51.096 }, 00:18:51.096 "claimed": false, 00:18:51.096 "zoned": false, 00:18:51.096 "supported_io_types": { 00:18:51.096 "read": true, 00:18:51.096 "write": true, 00:18:51.096 "unmap": true, 00:18:51.096 "flush": true, 00:18:51.096 "reset": true, 00:18:51.096 "nvme_admin": false, 00:18:51.096 "nvme_io": false, 00:18:51.096 "nvme_io_md": false, 00:18:51.096 "write_zeroes": true, 00:18:51.096 "zcopy": false, 00:18:51.096 "get_zone_info": false, 00:18:51.096 "zone_management": false, 00:18:51.096 "zone_append": false, 00:18:51.096 "compare": false, 00:18:51.096 "compare_and_write": false, 00:18:51.096 "abort": false, 00:18:51.096 "seek_hole": false, 00:18:51.096 "seek_data": false, 00:18:51.096 "copy": false, 00:18:51.096 "nvme_iov_md": false 00:18:51.096 }, 00:18:51.096 "memory_domains": [ 00:18:51.096 { 00:18:51.096 "dma_device_id": "system", 00:18:51.096 "dma_device_type": 1 00:18:51.096 }, 00:18:51.096 { 00:18:51.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.096 "dma_device_type": 2 00:18:51.096 }, 00:18:51.096 { 00:18:51.096 "dma_device_id": "system", 00:18:51.096 "dma_device_type": 1 00:18:51.096 }, 00:18:51.096 { 00:18:51.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.096 "dma_device_type": 2 00:18:51.096 }, 00:18:51.096 { 00:18:51.096 "dma_device_id": "system", 00:18:51.096 "dma_device_type": 1 00:18:51.096 }, 00:18:51.096 { 00:18:51.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.096 "dma_device_type": 2 00:18:51.096 }, 00:18:51.096 { 00:18:51.096 "dma_device_id": "system", 00:18:51.096 "dma_device_type": 1 00:18:51.096 }, 00:18:51.096 { 00:18:51.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.096 "dma_device_type": 2 00:18:51.096 } 00:18:51.096 ], 00:18:51.096 "driver_specific": { 00:18:51.096 "raid": { 00:18:51.096 "uuid": 
"f0962625-8747-44e7-8da1-e148c92b5382", 00:18:51.096 "strip_size_kb": 64, 00:18:51.096 "state": "online", 00:18:51.096 "raid_level": "raid0", 00:18:51.096 "superblock": true, 00:18:51.096 "num_base_bdevs": 4, 00:18:51.096 "num_base_bdevs_discovered": 4, 00:18:51.096 "num_base_bdevs_operational": 4, 00:18:51.096 "base_bdevs_list": [ 00:18:51.096 { 00:18:51.096 "name": "NewBaseBdev", 00:18:51.096 "uuid": "f3b878ca-293d-4b98-b93a-25b810e0baef", 00:18:51.096 "is_configured": true, 00:18:51.096 "data_offset": 2048, 00:18:51.096 "data_size": 63488 00:18:51.096 }, 00:18:51.096 { 00:18:51.096 "name": "BaseBdev2", 00:18:51.096 "uuid": "b511884a-37b4-429c-bff8-c0dd37c61c5e", 00:18:51.096 "is_configured": true, 00:18:51.096 "data_offset": 2048, 00:18:51.096 "data_size": 63488 00:18:51.096 }, 00:18:51.096 { 00:18:51.096 "name": "BaseBdev3", 00:18:51.096 "uuid": "c5c6c9ee-15d0-4de0-818a-a8ed7a0ca489", 00:18:51.096 "is_configured": true, 00:18:51.096 "data_offset": 2048, 00:18:51.096 "data_size": 63488 00:18:51.096 }, 00:18:51.096 { 00:18:51.096 "name": "BaseBdev4", 00:18:51.096 "uuid": "f5d5a3d1-93dd-4761-bd1a-880a45336f18", 00:18:51.096 "is_configured": true, 00:18:51.096 "data_offset": 2048, 00:18:51.096 "data_size": 63488 00:18:51.096 } 00:18:51.096 ] 00:18:51.096 } 00:18:51.096 } 00:18:51.096 }' 00:18:51.096 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:51.096 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:51.096 BaseBdev2 00:18:51.096 BaseBdev3 00:18:51.096 BaseBdev4' 00:18:51.096 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:51.096 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:51.096 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:51.096 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:51.096 "name": "NewBaseBdev", 00:18:51.096 "aliases": [ 00:18:51.096 "f3b878ca-293d-4b98-b93a-25b810e0baef" 00:18:51.096 ], 00:18:51.096 "product_name": "Malloc disk", 00:18:51.096 "block_size": 512, 00:18:51.096 "num_blocks": 65536, 00:18:51.096 "uuid": "f3b878ca-293d-4b98-b93a-25b810e0baef", 00:18:51.096 "assigned_rate_limits": { 00:18:51.096 "rw_ios_per_sec": 0, 00:18:51.096 "rw_mbytes_per_sec": 0, 00:18:51.096 "r_mbytes_per_sec": 0, 00:18:51.096 "w_mbytes_per_sec": 0 00:18:51.096 }, 00:18:51.096 "claimed": true, 00:18:51.096 "claim_type": "exclusive_write", 00:18:51.096 "zoned": false, 00:18:51.096 "supported_io_types": { 00:18:51.096 "read": true, 00:18:51.096 "write": true, 00:18:51.096 "unmap": true, 00:18:51.096 "flush": true, 00:18:51.096 "reset": true, 00:18:51.096 "nvme_admin": false, 00:18:51.096 "nvme_io": false, 00:18:51.096 "nvme_io_md": false, 00:18:51.096 "write_zeroes": true, 00:18:51.096 "zcopy": true, 00:18:51.096 "get_zone_info": false, 00:18:51.096 "zone_management": false, 00:18:51.096 "zone_append": false, 00:18:51.096 "compare": false, 00:18:51.096 "compare_and_write": false, 00:18:51.096 "abort": true, 00:18:51.096 "seek_hole": false, 00:18:51.096 "seek_data": false, 00:18:51.096 "copy": true, 00:18:51.096 "nvme_iov_md": false 00:18:51.096 }, 00:18:51.096 "memory_domains": [ 00:18:51.096 { 00:18:51.096 "dma_device_id": "system", 00:18:51.096 "dma_device_type": 1 00:18:51.096 }, 00:18:51.096 { 00:18:51.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.096 "dma_device_type": 2 00:18:51.096 } 00:18:51.096 ], 00:18:51.096 "driver_specific": {} 00:18:51.096 }' 00:18:51.096 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.096 13:19:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.355 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:51.355 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.355 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.355 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:51.355 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.355 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.355 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:51.355 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.355 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.613 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:51.613 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:51.613 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:51.613 13:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:51.872 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:51.872 "name": "BaseBdev2", 00:18:51.872 "aliases": [ 00:18:51.872 "b511884a-37b4-429c-bff8-c0dd37c61c5e" 00:18:51.872 ], 00:18:51.872 "product_name": "Malloc disk", 00:18:51.872 "block_size": 512, 00:18:51.872 "num_blocks": 65536, 00:18:51.872 "uuid": "b511884a-37b4-429c-bff8-c0dd37c61c5e", 00:18:51.872 
"assigned_rate_limits": { 00:18:51.872 "rw_ios_per_sec": 0, 00:18:51.872 "rw_mbytes_per_sec": 0, 00:18:51.872 "r_mbytes_per_sec": 0, 00:18:51.872 "w_mbytes_per_sec": 0 00:18:51.872 }, 00:18:51.872 "claimed": true, 00:18:51.872 "claim_type": "exclusive_write", 00:18:51.872 "zoned": false, 00:18:51.872 "supported_io_types": { 00:18:51.872 "read": true, 00:18:51.872 "write": true, 00:18:51.872 "unmap": true, 00:18:51.872 "flush": true, 00:18:51.872 "reset": true, 00:18:51.872 "nvme_admin": false, 00:18:51.872 "nvme_io": false, 00:18:51.872 "nvme_io_md": false, 00:18:51.872 "write_zeroes": true, 00:18:51.872 "zcopy": true, 00:18:51.872 "get_zone_info": false, 00:18:51.872 "zone_management": false, 00:18:51.872 "zone_append": false, 00:18:51.872 "compare": false, 00:18:51.872 "compare_and_write": false, 00:18:51.872 "abort": true, 00:18:51.872 "seek_hole": false, 00:18:51.872 "seek_data": false, 00:18:51.872 "copy": true, 00:18:51.872 "nvme_iov_md": false 00:18:51.872 }, 00:18:51.872 "memory_domains": [ 00:18:51.872 { 00:18:51.872 "dma_device_id": "system", 00:18:51.872 "dma_device_type": 1 00:18:51.872 }, 00:18:51.872 { 00:18:51.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.872 "dma_device_type": 2 00:18:51.872 } 00:18:51.872 ], 00:18:51.872 "driver_specific": {} 00:18:51.872 }' 00:18:51.872 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.872 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.872 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:51.872 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.872 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.872 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:51.872 13:19:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.872 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.872 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:51.872 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:52.131 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:52.131 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:52.131 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:52.131 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:52.131 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:52.389 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:52.389 "name": "BaseBdev3", 00:18:52.389 "aliases": [ 00:18:52.389 "c5c6c9ee-15d0-4de0-818a-a8ed7a0ca489" 00:18:52.389 ], 00:18:52.389 "product_name": "Malloc disk", 00:18:52.389 "block_size": 512, 00:18:52.389 "num_blocks": 65536, 00:18:52.389 "uuid": "c5c6c9ee-15d0-4de0-818a-a8ed7a0ca489", 00:18:52.389 "assigned_rate_limits": { 00:18:52.389 "rw_ios_per_sec": 0, 00:18:52.389 "rw_mbytes_per_sec": 0, 00:18:52.389 "r_mbytes_per_sec": 0, 00:18:52.389 "w_mbytes_per_sec": 0 00:18:52.389 }, 00:18:52.389 "claimed": true, 00:18:52.389 "claim_type": "exclusive_write", 00:18:52.389 "zoned": false, 00:18:52.389 "supported_io_types": { 00:18:52.389 "read": true, 00:18:52.389 "write": true, 00:18:52.389 "unmap": true, 00:18:52.389 "flush": true, 00:18:52.389 "reset": true, 00:18:52.389 "nvme_admin": false, 00:18:52.389 "nvme_io": false, 00:18:52.389 "nvme_io_md": false, 00:18:52.389 
"write_zeroes": true, 00:18:52.389 "zcopy": true, 00:18:52.389 "get_zone_info": false, 00:18:52.389 "zone_management": false, 00:18:52.389 "zone_append": false, 00:18:52.389 "compare": false, 00:18:52.389 "compare_and_write": false, 00:18:52.389 "abort": true, 00:18:52.389 "seek_hole": false, 00:18:52.389 "seek_data": false, 00:18:52.389 "copy": true, 00:18:52.389 "nvme_iov_md": false 00:18:52.390 }, 00:18:52.390 "memory_domains": [ 00:18:52.390 { 00:18:52.390 "dma_device_id": "system", 00:18:52.390 "dma_device_type": 1 00:18:52.390 }, 00:18:52.390 { 00:18:52.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.390 "dma_device_type": 2 00:18:52.390 } 00:18:52.390 ], 00:18:52.390 "driver_specific": {} 00:18:52.390 }' 00:18:52.390 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:52.390 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:52.390 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:52.390 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:52.390 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:52.390 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:52.390 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.390 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.390 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:52.390 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:52.648 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:52.648 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:18:52.648 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:52.648 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:52.648 13:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:52.908 13:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:52.908 "name": "BaseBdev4", 00:18:52.908 "aliases": [ 00:18:52.908 "f5d5a3d1-93dd-4761-bd1a-880a45336f18" 00:18:52.908 ], 00:18:52.908 "product_name": "Malloc disk", 00:18:52.908 "block_size": 512, 00:18:52.908 "num_blocks": 65536, 00:18:52.908 "uuid": "f5d5a3d1-93dd-4761-bd1a-880a45336f18", 00:18:52.908 "assigned_rate_limits": { 00:18:52.908 "rw_ios_per_sec": 0, 00:18:52.908 "rw_mbytes_per_sec": 0, 00:18:52.908 "r_mbytes_per_sec": 0, 00:18:52.908 "w_mbytes_per_sec": 0 00:18:52.908 }, 00:18:52.908 "claimed": true, 00:18:52.908 "claim_type": "exclusive_write", 00:18:52.908 "zoned": false, 00:18:52.908 "supported_io_types": { 00:18:52.908 "read": true, 00:18:52.908 "write": true, 00:18:52.908 "unmap": true, 00:18:52.908 "flush": true, 00:18:52.908 "reset": true, 00:18:52.908 "nvme_admin": false, 00:18:52.908 "nvme_io": false, 00:18:52.908 "nvme_io_md": false, 00:18:52.908 "write_zeroes": true, 00:18:52.908 "zcopy": true, 00:18:52.908 "get_zone_info": false, 00:18:52.908 "zone_management": false, 00:18:52.908 "zone_append": false, 00:18:52.908 "compare": false, 00:18:52.908 "compare_and_write": false, 00:18:52.908 "abort": true, 00:18:52.908 "seek_hole": false, 00:18:52.908 "seek_data": false, 00:18:52.908 "copy": true, 00:18:52.908 "nvme_iov_md": false 00:18:52.908 }, 00:18:52.908 "memory_domains": [ 00:18:52.908 { 00:18:52.908 "dma_device_id": "system", 00:18:52.908 "dma_device_type": 1 00:18:52.908 }, 00:18:52.908 { 00:18:52.908 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.908 "dma_device_type": 2 00:18:52.908 } 00:18:52.908 ], 00:18:52.908 "driver_specific": {} 00:18:52.908 }' 00:18:52.908 13:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:52.908 13:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:52.908 13:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:52.908 13:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:52.908 13:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:52.908 13:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:52.908 13:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.908 13:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.908 13:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:52.908 13:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:53.167 13:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:53.167 13:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:53.167 13:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:53.424 [2024-07-25 13:19:03.685576] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:53.425 [2024-07-25 13:19:03.685599] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:53.425 [2024-07-25 13:19:03.685643] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:18:53.425 [2024-07-25 13:19:03.685695] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:53.425 [2024-07-25 13:19:03.685706] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1455360 name Existed_Raid, state offline 00:18:53.425 13:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 905964 00:18:53.425 13:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 905964 ']' 00:18:53.425 13:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 905964 00:18:53.425 13:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:18:53.425 13:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:53.425 13:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 905964 00:18:53.425 13:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:53.425 13:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:53.425 13:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 905964' 00:18:53.425 killing process with pid 905964 00:18:53.425 13:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 905964 00:18:53.425 [2024-07-25 13:19:03.756809] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:53.425 13:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 905964 00:18:53.425 [2024-07-25 13:19:03.787381] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:53.683 13:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:53.683 
00:18:53.683 real 0m31.077s 00:18:53.683 user 0m57.358s 00:18:53.683 sys 0m5.334s 00:18:53.683 13:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:53.683 13:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.683 ************************************ 00:18:53.683 END TEST raid_state_function_test_sb 00:18:53.683 ************************************ 00:18:53.683 13:19:04 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:18:53.683 13:19:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:53.683 13:19:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:53.683 13:19:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.683 ************************************ 00:18:53.683 START TEST raid_superblock_test 00:18:53.683 ************************************ 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=911910 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 911910 /var/tmp/spdk-raid.sock 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 911910 ']' 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:53.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:53.683 13:19:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.683 [2024-07-25 13:19:04.126191] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:18:53.683 [2024-07-25 13:19:04.126248] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911910 ] 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3f:01.7 
cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:53.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:53.942 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:53.942 [2024-07-25 13:19:04.258174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.942 [2024-07-25 13:19:04.344436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.942 [2024-07-25 13:19:04.400937] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.942 [2024-07-25 13:19:04.400975] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.880 13:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:54.880 13:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:18:54.880 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:18:54.880 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 
00:18:54.880 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:18:54.880 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:18:54.880 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:54.880 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:54.880 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:54.880 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:54.880 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:54.880 malloc1 00:18:54.880 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:55.139 [2024-07-25 13:19:05.466240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:55.139 [2024-07-25 13:19:05.466282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.139 [2024-07-25 13:19:05.466300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eb62f0 00:18:55.139 [2024-07-25 13:19:05.466311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.139 [2024-07-25 13:19:05.467815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.139 [2024-07-25 13:19:05.467841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:55.139 pt1 00:18:55.139 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( 
i++ )) 00:18:55.139 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:55.139 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:18:55.139 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:18:55.139 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:55.139 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:55.139 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:55.139 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:55.139 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:55.398 malloc2 00:18:55.398 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:55.657 [2024-07-25 13:19:05.927822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:55.657 [2024-07-25 13:19:05.927860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.657 [2024-07-25 13:19:05.927875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x204df70 00:18:55.657 [2024-07-25 13:19:05.927887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.657 [2024-07-25 13:19:05.929303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.657 [2024-07-25 13:19:05.929328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:18:55.657 pt2 00:18:55.657 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:55.657 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:55.657 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:18:55.657 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:18:55.657 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:55.657 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:55.657 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:55.657 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:55.657 13:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:55.917 malloc3 00:18:55.917 13:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:55.917 [2024-07-25 13:19:06.385353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:55.917 [2024-07-25 13:19:06.385394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.917 [2024-07-25 13:19:06.385410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2051830 00:18:55.917 [2024-07-25 13:19:06.385421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.917 [2024-07-25 13:19:06.386755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:18:55.917 [2024-07-25 13:19:06.386782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:55.917 pt3 00:18:55.917 13:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:55.917 13:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:55.917 13:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:18:55.917 13:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:18:55.917 13:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:55.917 13:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:55.917 13:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:55.917 13:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:55.917 13:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:56.175 malloc4 00:18:56.175 13:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:56.434 [2024-07-25 13:19:06.850915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:56.434 [2024-07-25 13:19:06.850956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.434 [2024-07-25 13:19:06.850977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2052f10 00:18:56.434 [2024-07-25 13:19:06.850988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:18:56.434 [2024-07-25 13:19:06.852336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.434 [2024-07-25 13:19:06.852362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:56.434 pt4 00:18:56.434 13:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:56.434 13:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:56.434 13:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:56.693 [2024-07-25 13:19:07.079550] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:56.693 [2024-07-25 13:19:07.080715] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:56.693 [2024-07-25 13:19:07.080777] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:56.693 [2024-07-25 13:19:07.080816] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:56.693 [2024-07-25 13:19:07.080959] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2054190 00:18:56.693 [2024-07-25 13:19:07.080969] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:56.693 [2024-07-25 13:19:07.081162] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2052c30 00:18:56.693 [2024-07-25 13:19:07.081290] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2054190 00:18:56.693 [2024-07-25 13:19:07.081299] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2054190 00:18:56.693 [2024-07-25 13:19:07.081398] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.693 13:19:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:56.693 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:56.693 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:56.693 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:56.693 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:56.693 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:56.693 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:56.693 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:56.694 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:56.694 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:56.694 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.694 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.953 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:56.953 "name": "raid_bdev1", 00:18:56.953 "uuid": "3443fb72-2096-4488-a4ca-8d354db3b190", 00:18:56.953 "strip_size_kb": 64, 00:18:56.953 "state": "online", 00:18:56.953 "raid_level": "raid0", 00:18:56.953 "superblock": true, 00:18:56.953 "num_base_bdevs": 4, 00:18:56.953 "num_base_bdevs_discovered": 4, 00:18:56.953 "num_base_bdevs_operational": 4, 00:18:56.953 "base_bdevs_list": [ 00:18:56.953 { 00:18:56.953 "name": "pt1", 00:18:56.953 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:56.953 "is_configured": 
true, 00:18:56.953 "data_offset": 2048, 00:18:56.953 "data_size": 63488 00:18:56.953 }, 00:18:56.953 { 00:18:56.953 "name": "pt2", 00:18:56.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:56.953 "is_configured": true, 00:18:56.953 "data_offset": 2048, 00:18:56.953 "data_size": 63488 00:18:56.953 }, 00:18:56.953 { 00:18:56.953 "name": "pt3", 00:18:56.953 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:56.953 "is_configured": true, 00:18:56.953 "data_offset": 2048, 00:18:56.953 "data_size": 63488 00:18:56.953 }, 00:18:56.953 { 00:18:56.953 "name": "pt4", 00:18:56.953 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:56.953 "is_configured": true, 00:18:56.953 "data_offset": 2048, 00:18:56.953 "data_size": 63488 00:18:56.953 } 00:18:56.953 ] 00:18:56.953 }' 00:18:56.953 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:56.953 13:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.521 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:18:57.521 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:57.521 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:57.521 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:57.521 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:57.521 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:57.521 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:57.521 13:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:57.780 [2024-07-25 13:19:08.122713] 
bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:57.780 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:57.780 "name": "raid_bdev1", 00:18:57.780 "aliases": [ 00:18:57.780 "3443fb72-2096-4488-a4ca-8d354db3b190" 00:18:57.780 ], 00:18:57.780 "product_name": "Raid Volume", 00:18:57.780 "block_size": 512, 00:18:57.780 "num_blocks": 253952, 00:18:57.780 "uuid": "3443fb72-2096-4488-a4ca-8d354db3b190", 00:18:57.780 "assigned_rate_limits": { 00:18:57.780 "rw_ios_per_sec": 0, 00:18:57.780 "rw_mbytes_per_sec": 0, 00:18:57.780 "r_mbytes_per_sec": 0, 00:18:57.780 "w_mbytes_per_sec": 0 00:18:57.780 }, 00:18:57.780 "claimed": false, 00:18:57.780 "zoned": false, 00:18:57.780 "supported_io_types": { 00:18:57.780 "read": true, 00:18:57.780 "write": true, 00:18:57.780 "unmap": true, 00:18:57.780 "flush": true, 00:18:57.780 "reset": true, 00:18:57.780 "nvme_admin": false, 00:18:57.780 "nvme_io": false, 00:18:57.780 "nvme_io_md": false, 00:18:57.780 "write_zeroes": true, 00:18:57.780 "zcopy": false, 00:18:57.780 "get_zone_info": false, 00:18:57.780 "zone_management": false, 00:18:57.780 "zone_append": false, 00:18:57.780 "compare": false, 00:18:57.780 "compare_and_write": false, 00:18:57.780 "abort": false, 00:18:57.780 "seek_hole": false, 00:18:57.780 "seek_data": false, 00:18:57.780 "copy": false, 00:18:57.780 "nvme_iov_md": false 00:18:57.780 }, 00:18:57.780 "memory_domains": [ 00:18:57.780 { 00:18:57.780 "dma_device_id": "system", 00:18:57.780 "dma_device_type": 1 00:18:57.780 }, 00:18:57.780 { 00:18:57.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.780 "dma_device_type": 2 00:18:57.780 }, 00:18:57.780 { 00:18:57.780 "dma_device_id": "system", 00:18:57.780 "dma_device_type": 1 00:18:57.780 }, 00:18:57.780 { 00:18:57.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.780 "dma_device_type": 2 00:18:57.780 }, 00:18:57.780 { 00:18:57.780 "dma_device_id": "system", 00:18:57.780 
"dma_device_type": 1 00:18:57.780 }, 00:18:57.780 { 00:18:57.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.780 "dma_device_type": 2 00:18:57.780 }, 00:18:57.780 { 00:18:57.780 "dma_device_id": "system", 00:18:57.780 "dma_device_type": 1 00:18:57.780 }, 00:18:57.780 { 00:18:57.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.780 "dma_device_type": 2 00:18:57.780 } 00:18:57.780 ], 00:18:57.780 "driver_specific": { 00:18:57.780 "raid": { 00:18:57.780 "uuid": "3443fb72-2096-4488-a4ca-8d354db3b190", 00:18:57.780 "strip_size_kb": 64, 00:18:57.780 "state": "online", 00:18:57.780 "raid_level": "raid0", 00:18:57.780 "superblock": true, 00:18:57.780 "num_base_bdevs": 4, 00:18:57.780 "num_base_bdevs_discovered": 4, 00:18:57.780 "num_base_bdevs_operational": 4, 00:18:57.780 "base_bdevs_list": [ 00:18:57.780 { 00:18:57.780 "name": "pt1", 00:18:57.780 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:57.780 "is_configured": true, 00:18:57.780 "data_offset": 2048, 00:18:57.780 "data_size": 63488 00:18:57.780 }, 00:18:57.780 { 00:18:57.780 "name": "pt2", 00:18:57.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:57.780 "is_configured": true, 00:18:57.780 "data_offset": 2048, 00:18:57.780 "data_size": 63488 00:18:57.780 }, 00:18:57.780 { 00:18:57.780 "name": "pt3", 00:18:57.780 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:57.780 "is_configured": true, 00:18:57.780 "data_offset": 2048, 00:18:57.780 "data_size": 63488 00:18:57.780 }, 00:18:57.780 { 00:18:57.780 "name": "pt4", 00:18:57.780 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:57.780 "is_configured": true, 00:18:57.780 "data_offset": 2048, 00:18:57.780 "data_size": 63488 00:18:57.780 } 00:18:57.780 ] 00:18:57.780 } 00:18:57.780 } 00:18:57.780 }' 00:18:57.780 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:57.780 13:19:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:57.780 pt2 00:18:57.780 pt3 00:18:57.780 pt4' 00:18:57.780 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:57.780 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:57.780 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:58.039 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:58.040 "name": "pt1", 00:18:58.040 "aliases": [ 00:18:58.040 "00000000-0000-0000-0000-000000000001" 00:18:58.040 ], 00:18:58.040 "product_name": "passthru", 00:18:58.040 "block_size": 512, 00:18:58.040 "num_blocks": 65536, 00:18:58.040 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:58.040 "assigned_rate_limits": { 00:18:58.040 "rw_ios_per_sec": 0, 00:18:58.040 "rw_mbytes_per_sec": 0, 00:18:58.040 "r_mbytes_per_sec": 0, 00:18:58.040 "w_mbytes_per_sec": 0 00:18:58.040 }, 00:18:58.040 "claimed": true, 00:18:58.040 "claim_type": "exclusive_write", 00:18:58.040 "zoned": false, 00:18:58.040 "supported_io_types": { 00:18:58.040 "read": true, 00:18:58.040 "write": true, 00:18:58.040 "unmap": true, 00:18:58.040 "flush": true, 00:18:58.040 "reset": true, 00:18:58.040 "nvme_admin": false, 00:18:58.040 "nvme_io": false, 00:18:58.040 "nvme_io_md": false, 00:18:58.040 "write_zeroes": true, 00:18:58.040 "zcopy": true, 00:18:58.040 "get_zone_info": false, 00:18:58.040 "zone_management": false, 00:18:58.040 "zone_append": false, 00:18:58.040 "compare": false, 00:18:58.040 "compare_and_write": false, 00:18:58.040 "abort": true, 00:18:58.040 "seek_hole": false, 00:18:58.040 "seek_data": false, 00:18:58.040 "copy": true, 00:18:58.040 "nvme_iov_md": false 00:18:58.040 }, 00:18:58.040 "memory_domains": [ 00:18:58.040 { 00:18:58.040 "dma_device_id": "system", 00:18:58.040 
"dma_device_type": 1 00:18:58.040 }, 00:18:58.040 { 00:18:58.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.040 "dma_device_type": 2 00:18:58.040 } 00:18:58.040 ], 00:18:58.040 "driver_specific": { 00:18:58.040 "passthru": { 00:18:58.040 "name": "pt1", 00:18:58.040 "base_bdev_name": "malloc1" 00:18:58.040 } 00:18:58.040 } 00:18:58.040 }' 00:18:58.040 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.040 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.040 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:58.040 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.299 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.299 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:58.299 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.299 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.299 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:58.299 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.299 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.299 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:58.299 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:58.299 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:58.299 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:58.559 13:19:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:58.559 "name": "pt2", 00:18:58.559 "aliases": [ 00:18:58.559 "00000000-0000-0000-0000-000000000002" 00:18:58.559 ], 00:18:58.559 "product_name": "passthru", 00:18:58.559 "block_size": 512, 00:18:58.559 "num_blocks": 65536, 00:18:58.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.559 "assigned_rate_limits": { 00:18:58.559 "rw_ios_per_sec": 0, 00:18:58.559 "rw_mbytes_per_sec": 0, 00:18:58.559 "r_mbytes_per_sec": 0, 00:18:58.559 "w_mbytes_per_sec": 0 00:18:58.559 }, 00:18:58.559 "claimed": true, 00:18:58.559 "claim_type": "exclusive_write", 00:18:58.559 "zoned": false, 00:18:58.559 "supported_io_types": { 00:18:58.559 "read": true, 00:18:58.559 "write": true, 00:18:58.559 "unmap": true, 00:18:58.559 "flush": true, 00:18:58.559 "reset": true, 00:18:58.559 "nvme_admin": false, 00:18:58.559 "nvme_io": false, 00:18:58.559 "nvme_io_md": false, 00:18:58.559 "write_zeroes": true, 00:18:58.559 "zcopy": true, 00:18:58.559 "get_zone_info": false, 00:18:58.559 "zone_management": false, 00:18:58.559 "zone_append": false, 00:18:58.559 "compare": false, 00:18:58.559 "compare_and_write": false, 00:18:58.559 "abort": true, 00:18:58.559 "seek_hole": false, 00:18:58.559 "seek_data": false, 00:18:58.559 "copy": true, 00:18:58.559 "nvme_iov_md": false 00:18:58.559 }, 00:18:58.559 "memory_domains": [ 00:18:58.559 { 00:18:58.559 "dma_device_id": "system", 00:18:58.559 "dma_device_type": 1 00:18:58.559 }, 00:18:58.559 { 00:18:58.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.559 "dma_device_type": 2 00:18:58.559 } 00:18:58.559 ], 00:18:58.559 "driver_specific": { 00:18:58.559 "passthru": { 00:18:58.559 "name": "pt2", 00:18:58.559 "base_bdev_name": "malloc2" 00:18:58.559 } 00:18:58.559 } 00:18:58.559 }' 00:18:58.559 13:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.559 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.559 13:19:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:58.559 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.818 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.818 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:58.818 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.818 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.818 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:58.818 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.818 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.077 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:59.077 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:59.077 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:59.077 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:59.077 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:59.077 "name": "pt3", 00:18:59.077 "aliases": [ 00:18:59.077 "00000000-0000-0000-0000-000000000003" 00:18:59.077 ], 00:18:59.077 "product_name": "passthru", 00:18:59.077 "block_size": 512, 00:18:59.077 "num_blocks": 65536, 00:18:59.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:59.077 "assigned_rate_limits": { 00:18:59.077 "rw_ios_per_sec": 0, 00:18:59.077 "rw_mbytes_per_sec": 0, 00:18:59.077 "r_mbytes_per_sec": 0, 00:18:59.077 "w_mbytes_per_sec": 0 00:18:59.077 }, 00:18:59.077 "claimed": true, 00:18:59.077 
"claim_type": "exclusive_write", 00:18:59.077 "zoned": false, 00:18:59.077 "supported_io_types": { 00:18:59.077 "read": true, 00:18:59.077 "write": true, 00:18:59.077 "unmap": true, 00:18:59.077 "flush": true, 00:18:59.078 "reset": true, 00:18:59.078 "nvme_admin": false, 00:18:59.078 "nvme_io": false, 00:18:59.078 "nvme_io_md": false, 00:18:59.078 "write_zeroes": true, 00:18:59.078 "zcopy": true, 00:18:59.078 "get_zone_info": false, 00:18:59.078 "zone_management": false, 00:18:59.078 "zone_append": false, 00:18:59.078 "compare": false, 00:18:59.078 "compare_and_write": false, 00:18:59.078 "abort": true, 00:18:59.078 "seek_hole": false, 00:18:59.078 "seek_data": false, 00:18:59.078 "copy": true, 00:18:59.078 "nvme_iov_md": false 00:18:59.078 }, 00:18:59.078 "memory_domains": [ 00:18:59.078 { 00:18:59.078 "dma_device_id": "system", 00:18:59.078 "dma_device_type": 1 00:18:59.078 }, 00:18:59.078 { 00:18:59.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.078 "dma_device_type": 2 00:18:59.078 } 00:18:59.078 ], 00:18:59.078 "driver_specific": { 00:18:59.078 "passthru": { 00:18:59.078 "name": "pt3", 00:18:59.078 "base_bdev_name": "malloc3" 00:18:59.078 } 00:18:59.078 } 00:18:59.078 }' 00:18:59.078 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.336 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.336 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:59.336 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.336 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.336 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:59.336 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:59.336 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:18:59.336 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:59.336 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.595 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.595 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:59.595 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:59.595 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:59.595 13:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:59.854 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:59.854 "name": "pt4", 00:18:59.854 "aliases": [ 00:18:59.854 "00000000-0000-0000-0000-000000000004" 00:18:59.854 ], 00:18:59.854 "product_name": "passthru", 00:18:59.854 "block_size": 512, 00:18:59.854 "num_blocks": 65536, 00:18:59.854 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:59.854 "assigned_rate_limits": { 00:18:59.854 "rw_ios_per_sec": 0, 00:18:59.854 "rw_mbytes_per_sec": 0, 00:18:59.854 "r_mbytes_per_sec": 0, 00:18:59.854 "w_mbytes_per_sec": 0 00:18:59.854 }, 00:18:59.854 "claimed": true, 00:18:59.854 "claim_type": "exclusive_write", 00:18:59.854 "zoned": false, 00:18:59.854 "supported_io_types": { 00:18:59.854 "read": true, 00:18:59.854 "write": true, 00:18:59.854 "unmap": true, 00:18:59.854 "flush": true, 00:18:59.854 "reset": true, 00:18:59.854 "nvme_admin": false, 00:18:59.854 "nvme_io": false, 00:18:59.854 "nvme_io_md": false, 00:18:59.854 "write_zeroes": true, 00:18:59.854 "zcopy": true, 00:18:59.854 "get_zone_info": false, 00:18:59.854 "zone_management": false, 00:18:59.854 "zone_append": false, 00:18:59.854 "compare": false, 00:18:59.854 
"compare_and_write": false, 00:18:59.854 "abort": true, 00:18:59.854 "seek_hole": false, 00:18:59.854 "seek_data": false, 00:18:59.854 "copy": true, 00:18:59.854 "nvme_iov_md": false 00:18:59.854 }, 00:18:59.854 "memory_domains": [ 00:18:59.854 { 00:18:59.854 "dma_device_id": "system", 00:18:59.854 "dma_device_type": 1 00:18:59.854 }, 00:18:59.854 { 00:18:59.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.854 "dma_device_type": 2 00:18:59.854 } 00:18:59.854 ], 00:18:59.854 "driver_specific": { 00:18:59.854 "passthru": { 00:18:59.854 "name": "pt4", 00:18:59.854 "base_bdev_name": "malloc4" 00:18:59.854 } 00:18:59.854 } 00:18:59.854 }' 00:18:59.854 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.854 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:59.854 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:59.854 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.854 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.854 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:59.854 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:00.114 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:00.114 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:00.114 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:00.114 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:00.114 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:00.114 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:00.114 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:19:00.373 [2024-07-25 13:19:10.701585] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.373 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=3443fb72-2096-4488-a4ca-8d354db3b190 00:19:00.373 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 3443fb72-2096-4488-a4ca-8d354db3b190 ']' 00:19:00.373 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:00.632 [2024-07-25 13:19:10.929897] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:00.632 [2024-07-25 13:19:10.929915] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:00.632 [2024-07-25 13:19:10.929961] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.632 [2024-07-25 13:19:10.930016] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.632 [2024-07-25 13:19:10.930031] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2054190 name raid_bdev1, state offline 00:19:00.632 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.632 13:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:19:00.891 13:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:19:00.891 13:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:19:00.891 13:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in 
"${base_bdevs_pt[@]}" 00:19:00.891 13:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:01.150 13:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:01.150 13:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:01.150 13:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:01.150 13:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:01.409 13:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:01.409 13:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:01.668 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:01.668 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:01.928 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:19:01.928 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:01.928 13:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:01.928 13:19:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:01.928 13:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:01.928 13:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:01.928 13:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:01.928 13:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:01.928 13:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:01.928 13:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:01.928 13:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:01.928 13:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:19:01.928 13:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:02.187 [2024-07-25 13:19:12.526126] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:02.187 [2024-07-25 13:19:12.527383] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:02.187 [2024-07-25 13:19:12.527424] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:02.187 [2024-07-25 13:19:12.527455] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:02.187 [2024-07-25 13:19:12.527495] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:02.187 [2024-07-25 13:19:12.527530] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:02.187 [2024-07-25 13:19:12.527552] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:02.187 [2024-07-25 13:19:12.527573] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:02.187 [2024-07-25 13:19:12.527592] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.187 [2024-07-25 13:19:12.527601] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20528d0 name raid_bdev1, state configuring 00:19:02.187 request: 00:19:02.187 { 00:19:02.187 "name": "raid_bdev1", 00:19:02.187 "raid_level": "raid0", 00:19:02.187 "base_bdevs": [ 00:19:02.187 "malloc1", 00:19:02.187 "malloc2", 00:19:02.187 "malloc3", 00:19:02.187 "malloc4" 00:19:02.187 ], 00:19:02.187 "strip_size_kb": 64, 00:19:02.187 "superblock": false, 00:19:02.187 "method": "bdev_raid_create", 00:19:02.187 "req_id": 1 00:19:02.187 } 00:19:02.187 Got JSON-RPC error response 00:19:02.187 response: 00:19:02.187 { 00:19:02.187 "code": -17, 00:19:02.187 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:02.187 } 00:19:02.187 13:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:19:02.187 13:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:02.187 13:19:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:02.187 13:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:02.187 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.187 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:19:02.447 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:19:02.447 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:19:02.447 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:02.705 [2024-07-25 13:19:12.975245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:02.705 [2024-07-25 13:19:12.975281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.705 [2024-07-25 13:19:12.975297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x204fea0 00:19:02.705 [2024-07-25 13:19:12.975308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.706 [2024-07-25 13:19:12.976743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.706 [2024-07-25 13:19:12.976771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:02.706 [2024-07-25 13:19:12.976825] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:02.706 [2024-07-25 13:19:12.976848] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:02.706 pt1 00:19:02.706 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid0 64 4 00:19:02.706 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:02.706 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:02.706 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:02.706 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:02.706 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:02.706 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:02.706 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:02.706 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:02.706 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:02.706 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.706 13:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.967 13:19:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.967 "name": "raid_bdev1", 00:19:02.967 "uuid": "3443fb72-2096-4488-a4ca-8d354db3b190", 00:19:02.967 "strip_size_kb": 64, 00:19:02.967 "state": "configuring", 00:19:02.967 "raid_level": "raid0", 00:19:02.967 "superblock": true, 00:19:02.967 "num_base_bdevs": 4, 00:19:02.967 "num_base_bdevs_discovered": 1, 00:19:02.967 "num_base_bdevs_operational": 4, 00:19:02.967 "base_bdevs_list": [ 00:19:02.967 { 00:19:02.967 "name": "pt1", 00:19:02.967 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:02.967 "is_configured": true, 00:19:02.967 "data_offset": 2048, 00:19:02.967 
"data_size": 63488 00:19:02.967 }, 00:19:02.967 { 00:19:02.967 "name": null, 00:19:02.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.967 "is_configured": false, 00:19:02.967 "data_offset": 2048, 00:19:02.967 "data_size": 63488 00:19:02.967 }, 00:19:02.967 { 00:19:02.967 "name": null, 00:19:02.967 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:02.967 "is_configured": false, 00:19:02.967 "data_offset": 2048, 00:19:02.967 "data_size": 63488 00:19:02.967 }, 00:19:02.967 { 00:19:02.967 "name": null, 00:19:02.967 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:02.967 "is_configured": false, 00:19:02.967 "data_offset": 2048, 00:19:02.967 "data_size": 63488 00:19:02.967 } 00:19:02.967 ] 00:19:02.967 }' 00:19:02.967 13:19:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:02.967 13:19:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.535 13:19:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:19:03.535 13:19:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:03.535 [2024-07-25 13:19:13.985914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:03.535 [2024-07-25 13:19:13.985956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.535 [2024-07-25 13:19:13.985972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20528d0 00:19:03.535 [2024-07-25 13:19:13.985984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.535 [2024-07-25 13:19:13.986291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.535 [2024-07-25 13:19:13.986307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 
00:19:03.535 [2024-07-25 13:19:13.986361] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:03.535 [2024-07-25 13:19:13.986377] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:03.535 pt2 00:19:03.535 13:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:03.839 [2024-07-25 13:19:14.214534] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:03.840 13:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:19:03.840 13:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:03.840 13:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:03.840 13:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:03.840 13:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:03.840 13:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:03.840 13:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.840 13:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.840 13:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.840 13:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.840 13:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.840 13:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:19:04.110 13:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:04.110 "name": "raid_bdev1", 00:19:04.110 "uuid": "3443fb72-2096-4488-a4ca-8d354db3b190", 00:19:04.110 "strip_size_kb": 64, 00:19:04.110 "state": "configuring", 00:19:04.110 "raid_level": "raid0", 00:19:04.110 "superblock": true, 00:19:04.110 "num_base_bdevs": 4, 00:19:04.110 "num_base_bdevs_discovered": 1, 00:19:04.110 "num_base_bdevs_operational": 4, 00:19:04.110 "base_bdevs_list": [ 00:19:04.110 { 00:19:04.110 "name": "pt1", 00:19:04.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:04.110 "is_configured": true, 00:19:04.110 "data_offset": 2048, 00:19:04.110 "data_size": 63488 00:19:04.110 }, 00:19:04.110 { 00:19:04.110 "name": null, 00:19:04.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.110 "is_configured": false, 00:19:04.110 "data_offset": 2048, 00:19:04.110 "data_size": 63488 00:19:04.110 }, 00:19:04.110 { 00:19:04.110 "name": null, 00:19:04.110 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:04.110 "is_configured": false, 00:19:04.110 "data_offset": 2048, 00:19:04.110 "data_size": 63488 00:19:04.110 }, 00:19:04.110 { 00:19:04.110 "name": null, 00:19:04.110 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:04.110 "is_configured": false, 00:19:04.110 "data_offset": 2048, 00:19:04.110 "data_size": 63488 00:19:04.110 } 00:19:04.110 ] 00:19:04.110 }' 00:19:04.110 13:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:04.110 13:19:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.676 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:19:04.676 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:04.676 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:04.935 [2024-07-25 13:19:15.261273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:04.935 [2024-07-25 13:19:15.261315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.935 [2024-07-25 13:19:15.261331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2055650 00:19:04.935 [2024-07-25 13:19:15.261342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.935 [2024-07-25 13:19:15.261647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.935 [2024-07-25 13:19:15.261662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:04.935 [2024-07-25 13:19:15.261717] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:04.935 [2024-07-25 13:19:15.261733] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:04.935 pt2 00:19:04.936 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:19:04.936 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:04.936 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:05.194 [2024-07-25 13:19:15.481846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:05.194 [2024-07-25 13:19:15.481874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.194 [2024-07-25 13:19:15.481888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2057350 00:19:05.194 [2024-07-25 13:19:15.481899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:19:05.194 [2024-07-25 13:19:15.482153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.194 [2024-07-25 13:19:15.482174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:05.194 [2024-07-25 13:19:15.482218] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:05.194 [2024-07-25 13:19:15.482233] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:05.194 pt3 00:19:05.194 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:19:05.194 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:05.194 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:05.453 [2024-07-25 13:19:15.710447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:05.453 [2024-07-25 13:19:15.710473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.453 [2024-07-25 13:19:15.710485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x204ee10 00:19:05.453 [2024-07-25 13:19:15.710496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.453 [2024-07-25 13:19:15.710734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.453 [2024-07-25 13:19:15.710750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:05.453 [2024-07-25 13:19:15.710791] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:05.453 [2024-07-25 13:19:15.710806] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:05.453 [2024-07-25 13:19:15.710908] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x204f8e0 00:19:05.453 [2024-07-25 13:19:15.710918] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:05.453 [2024-07-25 13:19:15.711065] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2050d40 00:19:05.453 [2024-07-25 13:19:15.711190] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x204f8e0 00:19:05.453 [2024-07-25 13:19:15.711199] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x204f8e0 00:19:05.453 [2024-07-25 13:19:15.711283] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.453 pt4 00:19:05.453 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:19:05.453 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:05.453 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:05.453 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:05.453 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:05.453 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:05.453 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:05.453 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:05.453 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:05.453 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:05.453 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:05.453 13:19:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:19:05.453 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.453 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.712 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:05.712 "name": "raid_bdev1", 00:19:05.712 "uuid": "3443fb72-2096-4488-a4ca-8d354db3b190", 00:19:05.712 "strip_size_kb": 64, 00:19:05.712 "state": "online", 00:19:05.712 "raid_level": "raid0", 00:19:05.712 "superblock": true, 00:19:05.712 "num_base_bdevs": 4, 00:19:05.712 "num_base_bdevs_discovered": 4, 00:19:05.712 "num_base_bdevs_operational": 4, 00:19:05.712 "base_bdevs_list": [ 00:19:05.712 { 00:19:05.712 "name": "pt1", 00:19:05.712 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:05.712 "is_configured": true, 00:19:05.712 "data_offset": 2048, 00:19:05.712 "data_size": 63488 00:19:05.712 }, 00:19:05.712 { 00:19:05.712 "name": "pt2", 00:19:05.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.712 "is_configured": true, 00:19:05.712 "data_offset": 2048, 00:19:05.712 "data_size": 63488 00:19:05.712 }, 00:19:05.712 { 00:19:05.712 "name": "pt3", 00:19:05.712 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:05.712 "is_configured": true, 00:19:05.712 "data_offset": 2048, 00:19:05.712 "data_size": 63488 00:19:05.712 }, 00:19:05.712 { 00:19:05.712 "name": "pt4", 00:19:05.712 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:05.712 "is_configured": true, 00:19:05.712 "data_offset": 2048, 00:19:05.712 "data_size": 63488 00:19:05.712 } 00:19:05.712 ] 00:19:05.712 }' 00:19:05.712 13:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:05.712 13:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.279 13:19:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:19:06.280 13:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:06.280 13:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:06.280 13:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:06.280 13:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:06.280 13:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:06.280 13:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:06.280 13:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:06.280 [2024-07-25 13:19:16.737436] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.280 13:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:06.280 "name": "raid_bdev1", 00:19:06.280 "aliases": [ 00:19:06.280 "3443fb72-2096-4488-a4ca-8d354db3b190" 00:19:06.280 ], 00:19:06.280 "product_name": "Raid Volume", 00:19:06.280 "block_size": 512, 00:19:06.280 "num_blocks": 253952, 00:19:06.280 "uuid": "3443fb72-2096-4488-a4ca-8d354db3b190", 00:19:06.280 "assigned_rate_limits": { 00:19:06.280 "rw_ios_per_sec": 0, 00:19:06.280 "rw_mbytes_per_sec": 0, 00:19:06.280 "r_mbytes_per_sec": 0, 00:19:06.280 "w_mbytes_per_sec": 0 00:19:06.280 }, 00:19:06.280 "claimed": false, 00:19:06.280 "zoned": false, 00:19:06.280 "supported_io_types": { 00:19:06.280 "read": true, 00:19:06.280 "write": true, 00:19:06.280 "unmap": true, 00:19:06.280 "flush": true, 00:19:06.280 "reset": true, 00:19:06.280 "nvme_admin": false, 00:19:06.280 "nvme_io": false, 00:19:06.280 "nvme_io_md": false, 00:19:06.280 "write_zeroes": 
true, 00:19:06.280 "zcopy": false, 00:19:06.280 "get_zone_info": false, 00:19:06.280 "zone_management": false, 00:19:06.280 "zone_append": false, 00:19:06.280 "compare": false, 00:19:06.280 "compare_and_write": false, 00:19:06.280 "abort": false, 00:19:06.280 "seek_hole": false, 00:19:06.280 "seek_data": false, 00:19:06.280 "copy": false, 00:19:06.280 "nvme_iov_md": false 00:19:06.280 }, 00:19:06.280 "memory_domains": [ 00:19:06.280 { 00:19:06.280 "dma_device_id": "system", 00:19:06.280 "dma_device_type": 1 00:19:06.280 }, 00:19:06.280 { 00:19:06.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.280 "dma_device_type": 2 00:19:06.280 }, 00:19:06.280 { 00:19:06.280 "dma_device_id": "system", 00:19:06.280 "dma_device_type": 1 00:19:06.280 }, 00:19:06.280 { 00:19:06.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.280 "dma_device_type": 2 00:19:06.280 }, 00:19:06.280 { 00:19:06.280 "dma_device_id": "system", 00:19:06.280 "dma_device_type": 1 00:19:06.280 }, 00:19:06.280 { 00:19:06.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.280 "dma_device_type": 2 00:19:06.280 }, 00:19:06.280 { 00:19:06.280 "dma_device_id": "system", 00:19:06.280 "dma_device_type": 1 00:19:06.280 }, 00:19:06.280 { 00:19:06.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.280 "dma_device_type": 2 00:19:06.280 } 00:19:06.280 ], 00:19:06.280 "driver_specific": { 00:19:06.280 "raid": { 00:19:06.280 "uuid": "3443fb72-2096-4488-a4ca-8d354db3b190", 00:19:06.280 "strip_size_kb": 64, 00:19:06.280 "state": "online", 00:19:06.280 "raid_level": "raid0", 00:19:06.280 "superblock": true, 00:19:06.280 "num_base_bdevs": 4, 00:19:06.280 "num_base_bdevs_discovered": 4, 00:19:06.280 "num_base_bdevs_operational": 4, 00:19:06.280 "base_bdevs_list": [ 00:19:06.280 { 00:19:06.280 "name": "pt1", 00:19:06.280 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:06.280 "is_configured": true, 00:19:06.280 "data_offset": 2048, 00:19:06.280 "data_size": 63488 00:19:06.280 }, 00:19:06.280 { 
00:19:06.280 "name": "pt2", 00:19:06.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.280 "is_configured": true, 00:19:06.280 "data_offset": 2048, 00:19:06.280 "data_size": 63488 00:19:06.280 }, 00:19:06.280 { 00:19:06.280 "name": "pt3", 00:19:06.280 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:06.280 "is_configured": true, 00:19:06.280 "data_offset": 2048, 00:19:06.280 "data_size": 63488 00:19:06.280 }, 00:19:06.280 { 00:19:06.280 "name": "pt4", 00:19:06.280 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:06.280 "is_configured": true, 00:19:06.280 "data_offset": 2048, 00:19:06.280 "data_size": 63488 00:19:06.280 } 00:19:06.280 ] 00:19:06.280 } 00:19:06.280 } 00:19:06.280 }' 00:19:06.280 13:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:06.539 13:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:06.539 pt2 00:19:06.539 pt3 00:19:06.539 pt4' 00:19:06.539 13:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:06.539 13:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:06.539 13:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:06.798 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:06.798 "name": "pt1", 00:19:06.798 "aliases": [ 00:19:06.798 "00000000-0000-0000-0000-000000000001" 00:19:06.798 ], 00:19:06.798 "product_name": "passthru", 00:19:06.798 "block_size": 512, 00:19:06.798 "num_blocks": 65536, 00:19:06.798 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:06.798 "assigned_rate_limits": { 00:19:06.798 "rw_ios_per_sec": 0, 00:19:06.798 "rw_mbytes_per_sec": 0, 00:19:06.798 "r_mbytes_per_sec": 0, 00:19:06.798 
"w_mbytes_per_sec": 0 00:19:06.798 }, 00:19:06.798 "claimed": true, 00:19:06.798 "claim_type": "exclusive_write", 00:19:06.798 "zoned": false, 00:19:06.798 "supported_io_types": { 00:19:06.798 "read": true, 00:19:06.798 "write": true, 00:19:06.798 "unmap": true, 00:19:06.798 "flush": true, 00:19:06.798 "reset": true, 00:19:06.798 "nvme_admin": false, 00:19:06.798 "nvme_io": false, 00:19:06.798 "nvme_io_md": false, 00:19:06.798 "write_zeroes": true, 00:19:06.798 "zcopy": true, 00:19:06.798 "get_zone_info": false, 00:19:06.798 "zone_management": false, 00:19:06.798 "zone_append": false, 00:19:06.798 "compare": false, 00:19:06.798 "compare_and_write": false, 00:19:06.798 "abort": true, 00:19:06.798 "seek_hole": false, 00:19:06.798 "seek_data": false, 00:19:06.798 "copy": true, 00:19:06.798 "nvme_iov_md": false 00:19:06.798 }, 00:19:06.798 "memory_domains": [ 00:19:06.798 { 00:19:06.798 "dma_device_id": "system", 00:19:06.798 "dma_device_type": 1 00:19:06.798 }, 00:19:06.798 { 00:19:06.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.798 "dma_device_type": 2 00:19:06.798 } 00:19:06.798 ], 00:19:06.798 "driver_specific": { 00:19:06.798 "passthru": { 00:19:06.798 "name": "pt1", 00:19:06.798 "base_bdev_name": "malloc1" 00:19:06.798 } 00:19:06.798 } 00:19:06.798 }' 00:19:06.798 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:06.798 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:06.798 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:06.798 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:06.798 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:06.798 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:06.798 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:06.798 13:19:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.057 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:07.057 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.057 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.057 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:07.057 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:07.057 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:07.057 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:07.316 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:07.316 "name": "pt2", 00:19:07.316 "aliases": [ 00:19:07.316 "00000000-0000-0000-0000-000000000002" 00:19:07.316 ], 00:19:07.316 "product_name": "passthru", 00:19:07.316 "block_size": 512, 00:19:07.316 "num_blocks": 65536, 00:19:07.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.316 "assigned_rate_limits": { 00:19:07.316 "rw_ios_per_sec": 0, 00:19:07.316 "rw_mbytes_per_sec": 0, 00:19:07.316 "r_mbytes_per_sec": 0, 00:19:07.316 "w_mbytes_per_sec": 0 00:19:07.316 }, 00:19:07.316 "claimed": true, 00:19:07.316 "claim_type": "exclusive_write", 00:19:07.316 "zoned": false, 00:19:07.316 "supported_io_types": { 00:19:07.316 "read": true, 00:19:07.316 "write": true, 00:19:07.316 "unmap": true, 00:19:07.316 "flush": true, 00:19:07.316 "reset": true, 00:19:07.316 "nvme_admin": false, 00:19:07.316 "nvme_io": false, 00:19:07.316 "nvme_io_md": false, 00:19:07.316 "write_zeroes": true, 00:19:07.316 "zcopy": true, 00:19:07.316 "get_zone_info": false, 00:19:07.316 "zone_management": false, 00:19:07.316 
"zone_append": false, 00:19:07.316 "compare": false, 00:19:07.316 "compare_and_write": false, 00:19:07.316 "abort": true, 00:19:07.316 "seek_hole": false, 00:19:07.316 "seek_data": false, 00:19:07.316 "copy": true, 00:19:07.316 "nvme_iov_md": false 00:19:07.316 }, 00:19:07.316 "memory_domains": [ 00:19:07.316 { 00:19:07.316 "dma_device_id": "system", 00:19:07.316 "dma_device_type": 1 00:19:07.316 }, 00:19:07.316 { 00:19:07.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.316 "dma_device_type": 2 00:19:07.316 } 00:19:07.316 ], 00:19:07.316 "driver_specific": { 00:19:07.316 "passthru": { 00:19:07.316 "name": "pt2", 00:19:07.316 "base_bdev_name": "malloc2" 00:19:07.316 } 00:19:07.316 } 00:19:07.316 }' 00:19:07.316 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.316 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.316 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:07.316 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.316 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.316 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:07.316 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.575 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.575 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:07.575 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.575 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.575 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:07.575 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:19:07.575 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:07.575 13:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:07.834 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:07.834 "name": "pt3", 00:19:07.834 "aliases": [ 00:19:07.834 "00000000-0000-0000-0000-000000000003" 00:19:07.834 ], 00:19:07.834 "product_name": "passthru", 00:19:07.834 "block_size": 512, 00:19:07.834 "num_blocks": 65536, 00:19:07.834 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:07.834 "assigned_rate_limits": { 00:19:07.834 "rw_ios_per_sec": 0, 00:19:07.834 "rw_mbytes_per_sec": 0, 00:19:07.834 "r_mbytes_per_sec": 0, 00:19:07.834 "w_mbytes_per_sec": 0 00:19:07.834 }, 00:19:07.834 "claimed": true, 00:19:07.834 "claim_type": "exclusive_write", 00:19:07.834 "zoned": false, 00:19:07.834 "supported_io_types": { 00:19:07.834 "read": true, 00:19:07.834 "write": true, 00:19:07.834 "unmap": true, 00:19:07.834 "flush": true, 00:19:07.834 "reset": true, 00:19:07.834 "nvme_admin": false, 00:19:07.834 "nvme_io": false, 00:19:07.834 "nvme_io_md": false, 00:19:07.834 "write_zeroes": true, 00:19:07.834 "zcopy": true, 00:19:07.834 "get_zone_info": false, 00:19:07.834 "zone_management": false, 00:19:07.834 "zone_append": false, 00:19:07.834 "compare": false, 00:19:07.834 "compare_and_write": false, 00:19:07.834 "abort": true, 00:19:07.834 "seek_hole": false, 00:19:07.834 "seek_data": false, 00:19:07.834 "copy": true, 00:19:07.834 "nvme_iov_md": false 00:19:07.834 }, 00:19:07.834 "memory_domains": [ 00:19:07.834 { 00:19:07.834 "dma_device_id": "system", 00:19:07.834 "dma_device_type": 1 00:19:07.834 }, 00:19:07.834 { 00:19:07.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.834 "dma_device_type": 2 00:19:07.834 } 00:19:07.834 ], 00:19:07.834 "driver_specific": { 
00:19:07.834 "passthru": { 00:19:07.834 "name": "pt3", 00:19:07.834 "base_bdev_name": "malloc3" 00:19:07.834 } 00:19:07.834 } 00:19:07.834 }' 00:19:07.834 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.834 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.834 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:07.834 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:08.093 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:08.093 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:08.093 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:08.093 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:08.093 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:08.093 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.093 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.093 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:08.093 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:08.093 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:19:08.093 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:08.352 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:08.352 "name": "pt4", 00:19:08.352 "aliases": [ 00:19:08.352 "00000000-0000-0000-0000-000000000004" 00:19:08.352 ], 00:19:08.352 "product_name": "passthru", 
00:19:08.352 "block_size": 512, 00:19:08.352 "num_blocks": 65536, 00:19:08.352 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:08.352 "assigned_rate_limits": { 00:19:08.352 "rw_ios_per_sec": 0, 00:19:08.352 "rw_mbytes_per_sec": 0, 00:19:08.352 "r_mbytes_per_sec": 0, 00:19:08.352 "w_mbytes_per_sec": 0 00:19:08.352 }, 00:19:08.352 "claimed": true, 00:19:08.352 "claim_type": "exclusive_write", 00:19:08.352 "zoned": false, 00:19:08.352 "supported_io_types": { 00:19:08.352 "read": true, 00:19:08.352 "write": true, 00:19:08.352 "unmap": true, 00:19:08.352 "flush": true, 00:19:08.352 "reset": true, 00:19:08.352 "nvme_admin": false, 00:19:08.352 "nvme_io": false, 00:19:08.352 "nvme_io_md": false, 00:19:08.352 "write_zeroes": true, 00:19:08.352 "zcopy": true, 00:19:08.352 "get_zone_info": false, 00:19:08.352 "zone_management": false, 00:19:08.352 "zone_append": false, 00:19:08.352 "compare": false, 00:19:08.352 "compare_and_write": false, 00:19:08.352 "abort": true, 00:19:08.352 "seek_hole": false, 00:19:08.352 "seek_data": false, 00:19:08.352 "copy": true, 00:19:08.352 "nvme_iov_md": false 00:19:08.352 }, 00:19:08.352 "memory_domains": [ 00:19:08.352 { 00:19:08.352 "dma_device_id": "system", 00:19:08.352 "dma_device_type": 1 00:19:08.352 }, 00:19:08.352 { 00:19:08.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.353 "dma_device_type": 2 00:19:08.353 } 00:19:08.353 ], 00:19:08.353 "driver_specific": { 00:19:08.353 "passthru": { 00:19:08.353 "name": "pt4", 00:19:08.353 "base_bdev_name": "malloc4" 00:19:08.353 } 00:19:08.353 } 00:19:08.353 }' 00:19:08.353 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:08.353 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:08.610 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:08.610 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:08.610 13:19:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:08.610 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:08.610 13:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:08.610 13:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:08.610 13:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:08.610 13:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.610 13:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.869 13:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:08.869 13:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:08.869 13:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:19:08.869 [2024-07-25 13:19:19.332388] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.869 13:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 3443fb72-2096-4488-a4ca-8d354db3b190 '!=' 3443fb72-2096-4488-a4ca-8d354db3b190 ']' 00:19:08.869 13:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:19:08.869 13:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:08.869 13:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:08.869 13:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 911910 00:19:08.869 13:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 911910 ']' 00:19:08.869 13:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 911910 00:19:08.869 13:19:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:19:09.129 13:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:09.129 13:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 911910 00:19:09.129 13:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:09.129 13:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:09.129 13:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 911910' 00:19:09.129 killing process with pid 911910 00:19:09.129 13:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 911910 00:19:09.129 [2024-07-25 13:19:19.412203] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:09.129 [2024-07-25 13:19:19.412254] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.129 [2024-07-25 13:19:19.412310] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:09.129 [2024-07-25 13:19:19.412320] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x204f8e0 name raid_bdev1, state offline 00:19:09.129 13:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 911910 00:19:09.129 [2024-07-25 13:19:19.443359] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:09.390 13:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:19:09.390 00:19:09.390 real 0m15.568s 00:19:09.390 user 0m28.082s 00:19:09.390 sys 0m2.850s 00:19:09.390 13:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:09.390 13:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.390 ************************************ 
00:19:09.390 END TEST raid_superblock_test 00:19:09.390 ************************************ 00:19:09.390 13:19:19 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:19:09.390 13:19:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:09.390 13:19:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:09.390 13:19:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.390 ************************************ 00:19:09.390 START TEST raid_read_error_test 00:19:09.390 ************************************ 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:19:09.390 13:19:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.NnElNf4aUc 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=914885 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 914885 
/var/tmp/spdk-raid.sock 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 914885 ']' 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:09.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:09.390 13:19:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.390 [2024-07-25 13:19:19.797939] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:19:09.390 [2024-07-25 13:19:19.797999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid914885 ] 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3d:02.3 cannot be used 
00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.390 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:09.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.391 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:09.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.391 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:09.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.391 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:09.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.391 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:09.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.391 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:09.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.391 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:09.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.391 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:09.391 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.391 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:09.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.391 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:09.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.391 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:09.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.391 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:09.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.391 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:09.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:09.391 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:09.650 [2024-07-25 13:19:19.928482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.650 [2024-07-25 13:19:20.016035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.650 [2024-07-25 13:19:20.075749] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.651 [2024-07-25 13:19:20.075792] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.588 13:19:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:10.588 13:19:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:10.588 13:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:10.588 13:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:10.588 BaseBdev1_malloc 00:19:10.588 13:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:10.847 true 00:19:10.847 13:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:11.105 [2024-07-25 13:19:21.385316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:11.105 [2024-07-25 13:19:21.385356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.105 [2024-07-25 13:19:21.385378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23e71d0 00:19:11.105 [2024-07-25 13:19:21.385389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.105 [2024-07-25 13:19:21.386926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.105 [2024-07-25 13:19:21.386952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:11.105 BaseBdev1 00:19:11.105 13:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:11.105 13:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:11.364 BaseBdev2_malloc 00:19:11.364 13:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:11.622 true 00:19:11.622 13:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:11.622 [2024-07-25 13:19:22.071482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:19:11.622 [2024-07-25 13:19:22.071520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.622 [2024-07-25 13:19:22.071538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23ea710 00:19:11.622 [2024-07-25 13:19:22.071550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.622 [2024-07-25 13:19:22.072908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.622 [2024-07-25 13:19:22.072935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:11.622 BaseBdev2 00:19:11.622 13:19:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:11.622 13:19:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:11.881 BaseBdev3_malloc 00:19:11.881 13:19:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:12.140 true 00:19:12.140 13:19:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:12.398 [2024-07-25 13:19:22.769649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:12.398 [2024-07-25 13:19:22.769689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.398 [2024-07-25 13:19:22.769710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23ecde0 00:19:12.398 [2024-07-25 13:19:22.769721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.398 [2024-07-25 
13:19:22.771121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.398 [2024-07-25 13:19:22.771157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:12.398 BaseBdev3 00:19:12.398 13:19:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:12.398 13:19:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:12.657 BaseBdev4_malloc 00:19:12.657 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:19:12.915 true 00:19:12.915 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:13.173 [2024-07-25 13:19:23.455634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:13.173 [2024-07-25 13:19:23.455677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.174 [2024-07-25 13:19:23.455697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23ef130 00:19:13.174 [2024-07-25 13:19:23.455709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.174 [2024-07-25 13:19:23.457090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.174 [2024-07-25 13:19:23.457116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:13.174 BaseBdev4 00:19:13.174 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:19:13.432 [2024-07-25 13:19:23.672244] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:13.432 [2024-07-25 13:19:23.673421] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:13.432 [2024-07-25 13:19:23.673484] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:13.432 [2024-07-25 13:19:23.673536] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:13.432 [2024-07-25 13:19:23.673735] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x23f1790 00:19:13.432 [2024-07-25 13:19:23.673746] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:13.432 [2024-07-25 13:19:23.673930] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23f48a0 00:19:13.432 [2024-07-25 13:19:23.674060] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23f1790 00:19:13.432 [2024-07-25 13:19:23.674069] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23f1790 00:19:13.432 [2024-07-25 13:19:23.674187] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.432 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:13.432 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:13.432 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:13.432 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:13.432 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:13.432 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 
-- # local num_base_bdevs_operational=4 00:19:13.432 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.432 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.432 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:13.432 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.432 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.432 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.690 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:13.691 "name": "raid_bdev1", 00:19:13.691 "uuid": "5a4144f1-4aef-4d94-90dc-ae79238a6c8d", 00:19:13.691 "strip_size_kb": 64, 00:19:13.691 "state": "online", 00:19:13.691 "raid_level": "raid0", 00:19:13.691 "superblock": true, 00:19:13.691 "num_base_bdevs": 4, 00:19:13.691 "num_base_bdevs_discovered": 4, 00:19:13.691 "num_base_bdevs_operational": 4, 00:19:13.691 "base_bdevs_list": [ 00:19:13.691 { 00:19:13.691 "name": "BaseBdev1", 00:19:13.691 "uuid": "22625a05-d42a-57dd-8fbd-7fcd65990f23", 00:19:13.691 "is_configured": true, 00:19:13.691 "data_offset": 2048, 00:19:13.691 "data_size": 63488 00:19:13.691 }, 00:19:13.691 { 00:19:13.691 "name": "BaseBdev2", 00:19:13.691 "uuid": "00a4f9e6-a580-5454-94a3-a603215eaafd", 00:19:13.691 "is_configured": true, 00:19:13.691 "data_offset": 2048, 00:19:13.691 "data_size": 63488 00:19:13.691 }, 00:19:13.691 { 00:19:13.691 "name": "BaseBdev3", 00:19:13.691 "uuid": "2d66b25f-7880-5069-8fab-77dea076a7eb", 00:19:13.691 "is_configured": true, 00:19:13.691 "data_offset": 2048, 00:19:13.691 "data_size": 63488 00:19:13.691 }, 00:19:13.691 { 00:19:13.691 "name": 
"BaseBdev4", 00:19:13.691 "uuid": "7f8aa6bb-d564-57b7-a124-6fbaa5608bdd", 00:19:13.691 "is_configured": true, 00:19:13.691 "data_offset": 2048, 00:19:13.691 "data_size": 63488 00:19:13.691 } 00:19:13.691 ] 00:19:13.691 }' 00:19:13.691 13:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:13.691 13:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.259 13:19:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:19:14.259 13:19:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:14.259 [2024-07-25 13:19:24.603065] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23f0f50 00:19:15.195 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:15.455 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:19:15.455 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:15.455 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:19:15.455 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:15.455 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:15.455 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:15.455 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:15.455 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:15.455 13:19:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:15.455 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.455 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.455 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.455 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.455 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.455 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.714 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.714 "name": "raid_bdev1", 00:19:15.714 "uuid": "5a4144f1-4aef-4d94-90dc-ae79238a6c8d", 00:19:15.714 "strip_size_kb": 64, 00:19:15.714 "state": "online", 00:19:15.714 "raid_level": "raid0", 00:19:15.714 "superblock": true, 00:19:15.714 "num_base_bdevs": 4, 00:19:15.714 "num_base_bdevs_discovered": 4, 00:19:15.714 "num_base_bdevs_operational": 4, 00:19:15.714 "base_bdevs_list": [ 00:19:15.714 { 00:19:15.714 "name": "BaseBdev1", 00:19:15.714 "uuid": "22625a05-d42a-57dd-8fbd-7fcd65990f23", 00:19:15.714 "is_configured": true, 00:19:15.714 "data_offset": 2048, 00:19:15.714 "data_size": 63488 00:19:15.714 }, 00:19:15.714 { 00:19:15.714 "name": "BaseBdev2", 00:19:15.714 "uuid": "00a4f9e6-a580-5454-94a3-a603215eaafd", 00:19:15.714 "is_configured": true, 00:19:15.714 "data_offset": 2048, 00:19:15.714 "data_size": 63488 00:19:15.714 }, 00:19:15.714 { 00:19:15.714 "name": "BaseBdev3", 00:19:15.714 "uuid": "2d66b25f-7880-5069-8fab-77dea076a7eb", 00:19:15.714 "is_configured": true, 00:19:15.714 "data_offset": 2048, 00:19:15.714 "data_size": 63488 
00:19:15.714 }, 00:19:15.714 { 00:19:15.714 "name": "BaseBdev4", 00:19:15.714 "uuid": "7f8aa6bb-d564-57b7-a124-6fbaa5608bdd", 00:19:15.714 "is_configured": true, 00:19:15.714 "data_offset": 2048, 00:19:15.714 "data_size": 63488 00:19:15.714 } 00:19:15.714 ] 00:19:15.714 }' 00:19:15.714 13:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.714 13:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.283 13:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:16.283 [2024-07-25 13:19:26.770259] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:16.283 [2024-07-25 13:19:26.770290] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:16.542 [2024-07-25 13:19:26.773194] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.542 [2024-07-25 13:19:26.773228] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.542 [2024-07-25 13:19:26.773265] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:16.542 [2024-07-25 13:19:26.773275] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23f1790 name raid_bdev1, state offline 00:19:16.542 0 00:19:16.542 13:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 914885 00:19:16.542 13:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 914885 ']' 00:19:16.542 13:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 914885 00:19:16.542 13:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:19:16.542 13:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:19:16.542 13:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 914885 00:19:16.542 13:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:16.542 13:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:16.542 13:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 914885' 00:19:16.542 killing process with pid 914885 00:19:16.542 13:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 914885 00:19:16.542 [2024-07-25 13:19:26.844958] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:16.542 13:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 914885 00:19:16.542 [2024-07-25 13:19:26.871840] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:16.810 13:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:19:16.810 13:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.NnElNf4aUc 00:19:16.810 13:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:19:16.810 13:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:19:16.810 13:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:19:16.810 13:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:16.810 13:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:16.810 13:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:19:16.810 00:19:16.810 real 0m7.357s 00:19:16.810 user 0m11.704s 00:19:16.810 sys 0m1.322s 00:19:16.810 13:19:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:16.810 13:19:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.810 ************************************ 00:19:16.810 END TEST raid_read_error_test 00:19:16.810 ************************************ 00:19:16.810 13:19:27 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:19:16.810 13:19:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:16.810 13:19:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:16.810 13:19:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:16.810 ************************************ 00:19:16.810 START TEST raid_write_error_test 00:19:16.810 ************************************ 00:19:16.810 13:19:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:19:16.810 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:19:16.810 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:19:16.810 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:19:16.810 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:19:16.810 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:16.810 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:19:16.810 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:16.810 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:16.810 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:19:16.810 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:16.810 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs 
)) 00:19:16.810 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:19:16.810 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:16.810 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.pEnS0LP3qj 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # 
raid_pid=916265 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 916265 /var/tmp/spdk-raid.sock 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 916265 ']' 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:16.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:16.811 13:19:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.811 [2024-07-25 13:19:27.242788] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:19:16.811 [2024-07-25 13:19:27.242850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid916265 ] 00:19:17.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.110 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:17.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.110 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:17.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.110 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:17.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.110 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:17.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.110 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:17.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.110 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:17.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.110 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:17.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.110 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:17.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.110 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:17.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.110 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:17.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.110 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3d:02.3 cannot be used 
00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:17.111 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:17.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:17.111 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:17.111 [2024-07-25 13:19:27.375211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.111 [2024-07-25 13:19:27.461652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.111 [2024-07-25 13:19:27.519577] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.111 [2024-07-25 13:19:27.519609] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.682 13:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:17.682 13:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:17.682 13:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:17.682 13:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:17.942 BaseBdev1_malloc 00:19:17.942 13:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:18.201 true 00:19:18.201 13:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:18.460 [2024-07-25 13:19:28.812972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:18.460 [2024-07-25 13:19:28.813012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.460 [2024-07-25 13:19:28.813029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfc81d0 00:19:18.460 [2024-07-25 13:19:28.813041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.460 [2024-07-25 13:19:28.814626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.460 [2024-07-25 13:19:28.814652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:18.460 BaseBdev1 00:19:18.460 13:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:18.460 13:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:18.719 BaseBdev2_malloc 00:19:18.719 13:19:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:18.978 true 00:19:18.978 13:19:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:19.238 [2024-07-25 13:19:29.499157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:19:19.238 [2024-07-25 13:19:29.499196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.238 [2024-07-25 13:19:29.499213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfcb710 00:19:19.238 [2024-07-25 13:19:29.499224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.238 [2024-07-25 13:19:29.500593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.238 [2024-07-25 13:19:29.500620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:19.238 BaseBdev2 00:19:19.238 13:19:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:19.238 13:19:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:19.497 BaseBdev3_malloc 00:19:19.497 13:19:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:19.497 true 00:19:19.497 13:19:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:19.756 [2024-07-25 13:19:30.189269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:19.756 [2024-07-25 13:19:30.189310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.756 [2024-07-25 13:19:30.189330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfcdde0 00:19:19.756 [2024-07-25 13:19:30.189341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.756 [2024-07-25 
13:19:30.190789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.756 [2024-07-25 13:19:30.190814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:19.756 BaseBdev3 00:19:19.756 13:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:19.756 13:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:20.015 BaseBdev4_malloc 00:19:20.015 13:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:19:20.274 true 00:19:20.274 13:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:20.532 [2024-07-25 13:19:30.875268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:20.532 [2024-07-25 13:19:30.875306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.532 [2024-07-25 13:19:30.875327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfd0130 00:19:20.532 [2024-07-25 13:19:30.875338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.532 [2024-07-25 13:19:30.876719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.532 [2024-07-25 13:19:30.876745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:20.532 BaseBdev4 00:19:20.532 13:19:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:19:20.791 [2024-07-25 13:19:31.103896] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:20.791 [2024-07-25 13:19:31.105060] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:20.791 [2024-07-25 13:19:31.105132] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:20.791 [2024-07-25 13:19:31.105192] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:20.791 [2024-07-25 13:19:31.105390] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xfd2790 00:19:20.791 [2024-07-25 13:19:31.105401] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:20.791 [2024-07-25 13:19:31.105583] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfd58a0 00:19:20.791 [2024-07-25 13:19:31.105712] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfd2790 00:19:20.791 [2024-07-25 13:19:31.105721] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xfd2790 00:19:20.791 [2024-07-25 13:19:31.105829] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.791 13:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:20.791 13:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:20.791 13:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:20.791 13:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:20.791 13:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:20.791 13:19:31 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:20.791 13:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.791 13:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.791 13:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.791 13:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:20.791 13:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.791 13:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.050 13:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.050 "name": "raid_bdev1", 00:19:21.050 "uuid": "b243b573-cf09-4757-a514-15be635d5b36", 00:19:21.050 "strip_size_kb": 64, 00:19:21.050 "state": "online", 00:19:21.050 "raid_level": "raid0", 00:19:21.050 "superblock": true, 00:19:21.050 "num_base_bdevs": 4, 00:19:21.050 "num_base_bdevs_discovered": 4, 00:19:21.050 "num_base_bdevs_operational": 4, 00:19:21.050 "base_bdevs_list": [ 00:19:21.050 { 00:19:21.050 "name": "BaseBdev1", 00:19:21.050 "uuid": "bda68115-8504-5511-a1ee-df478c4a83d6", 00:19:21.050 "is_configured": true, 00:19:21.050 "data_offset": 2048, 00:19:21.050 "data_size": 63488 00:19:21.050 }, 00:19:21.050 { 00:19:21.050 "name": "BaseBdev2", 00:19:21.050 "uuid": "ce076f81-a73a-5645-b55b-89782418b56b", 00:19:21.050 "is_configured": true, 00:19:21.050 "data_offset": 2048, 00:19:21.050 "data_size": 63488 00:19:21.050 }, 00:19:21.050 { 00:19:21.050 "name": "BaseBdev3", 00:19:21.050 "uuid": "fb09b4b5-d7b6-5dd9-b233-b41c597e097a", 00:19:21.050 "is_configured": true, 00:19:21.050 "data_offset": 2048, 00:19:21.050 "data_size": 63488 00:19:21.050 }, 00:19:21.050 
{ 00:19:21.050 "name": "BaseBdev4", 00:19:21.050 "uuid": "4dd946f5-86ad-5387-88e0-d43701cd2822", 00:19:21.050 "is_configured": true, 00:19:21.050 "data_offset": 2048, 00:19:21.050 "data_size": 63488 00:19:21.050 } 00:19:21.050 ] 00:19:21.050 }' 00:19:21.051 13:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.051 13:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.618 13:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:19:21.618 13:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:21.618 [2024-07-25 13:19:32.030578] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfd1f50 00:19:22.556 13:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:22.816 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:19:22.816 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:22.816 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:19:22.816 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:22.816 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:22.816 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:22.816 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:22.816 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:19:22.816 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:22.816 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:22.816 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:22.816 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:22.816 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:22.816 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.816 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.074 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:23.074 "name": "raid_bdev1", 00:19:23.074 "uuid": "b243b573-cf09-4757-a514-15be635d5b36", 00:19:23.074 "strip_size_kb": 64, 00:19:23.074 "state": "online", 00:19:23.074 "raid_level": "raid0", 00:19:23.074 "superblock": true, 00:19:23.074 "num_base_bdevs": 4, 00:19:23.074 "num_base_bdevs_discovered": 4, 00:19:23.074 "num_base_bdevs_operational": 4, 00:19:23.074 "base_bdevs_list": [ 00:19:23.074 { 00:19:23.074 "name": "BaseBdev1", 00:19:23.074 "uuid": "bda68115-8504-5511-a1ee-df478c4a83d6", 00:19:23.074 "is_configured": true, 00:19:23.074 "data_offset": 2048, 00:19:23.074 "data_size": 63488 00:19:23.074 }, 00:19:23.074 { 00:19:23.074 "name": "BaseBdev2", 00:19:23.074 "uuid": "ce076f81-a73a-5645-b55b-89782418b56b", 00:19:23.074 "is_configured": true, 00:19:23.074 "data_offset": 2048, 00:19:23.074 "data_size": 63488 00:19:23.074 }, 00:19:23.074 { 00:19:23.074 "name": "BaseBdev3", 00:19:23.074 "uuid": "fb09b4b5-d7b6-5dd9-b233-b41c597e097a", 00:19:23.074 "is_configured": true, 00:19:23.074 
"data_offset": 2048, 00:19:23.074 "data_size": 63488 00:19:23.074 }, 00:19:23.074 { 00:19:23.074 "name": "BaseBdev4", 00:19:23.074 "uuid": "4dd946f5-86ad-5387-88e0-d43701cd2822", 00:19:23.074 "is_configured": true, 00:19:23.074 "data_offset": 2048, 00:19:23.074 "data_size": 63488 00:19:23.074 } 00:19:23.074 ] 00:19:23.074 }' 00:19:23.074 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:23.074 13:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.642 13:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:23.901 [2024-07-25 13:19:34.173774] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:23.901 [2024-07-25 13:19:34.173808] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:23.901 [2024-07-25 13:19:34.176818] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.901 [2024-07-25 13:19:34.176853] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.901 [2024-07-25 13:19:34.176888] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.901 [2024-07-25 13:19:34.176898] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfd2790 name raid_bdev1, state offline 00:19:23.901 0 00:19:23.901 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 916265 00:19:23.901 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 916265 ']' 00:19:23.901 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 916265 00:19:23.901 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:19:23.901 13:19:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:23.901 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 916265 00:19:23.901 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:23.901 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:23.901 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 916265' 00:19:23.901 killing process with pid 916265 00:19:23.901 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 916265 00:19:23.901 [2024-07-25 13:19:34.250028] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:23.901 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 916265 00:19:23.901 [2024-07-25 13:19:34.275805] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:24.161 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.pEnS0LP3qj 00:19:24.161 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:19:24.161 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:19:24.161 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:19:24.161 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:19:24.161 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:24.161 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:24.161 13:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:19:24.161 00:19:24.161 real 0m7.316s 00:19:24.161 user 0m11.607s 00:19:24.161 sys 0m1.354s 00:19:24.161 13:19:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:19:24.161 13:19:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.161 ************************************ 00:19:24.161 END TEST raid_write_error_test 00:19:24.161 ************************************ 00:19:24.161 13:19:34 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:19:24.161 13:19:34 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:19:24.161 13:19:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:24.161 13:19:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:24.161 13:19:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:24.161 ************************************ 00:19:24.161 START TEST raid_state_function_test 00:19:24.161 ************************************ 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 
00:19:24.161 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:19:24.162 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:19:24.162 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=917472 00:19:24.162 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 917472' 00:19:24.162 Process raid pid: 917472 00:19:24.162 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:24.162 13:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 917472 /var/tmp/spdk-raid.sock 00:19:24.162 13:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 917472 ']' 00:19:24.162 13:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:24.162 13:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:24.162 13:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:24.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:24.162 13:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:24.162 13:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.162 [2024-07-25 13:19:34.635358] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:19:24.162 [2024-07-25 13:19:34.635420] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:24.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:24.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:24.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:24.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:24.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:24.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:24.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:24.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:24.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:24.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:24.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:24.422 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested devices 0000:3d:02.4 through 0000:3f:02.1 cannot be used (same message repeated per device) 00:19:24.422 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:24.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:24.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:24.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:24.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:24.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:24.422 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:24.422 [2024-07-25 13:19:34.768827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.422 [2024-07-25 13:19:34.851899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.681 [2024-07-25 13:19:34.916398] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:24.681 [2024-07-25 13:19:34.916431] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:25.249 13:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:25.249 13:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:19:25.249 13:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:25.509 [2024-07-25 13:19:35.739780] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:25.509 [2024-07-25 13:19:35.739818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:19:25.509 [2024-07-25 13:19:35.739828] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:25.509 [2024-07-25 13:19:35.739839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:25.509 [2024-07-25 13:19:35.739847] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:25.509 [2024-07-25 13:19:35.739857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:25.509 [2024-07-25 13:19:35.739865] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:25.509 [2024-07-25 13:19:35.739878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:25.509 13:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:25.509 13:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:25.509 13:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:25.509 13:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:25.509 13:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:25.509 13:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:25.509 13:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:25.509 13:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:25.509 13:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:25.509 13:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:25.509 13:19:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.509 13:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.509 13:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:25.509 "name": "Existed_Raid", 00:19:25.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.509 "strip_size_kb": 64, 00:19:25.509 "state": "configuring", 00:19:25.509 "raid_level": "concat", 00:19:25.509 "superblock": false, 00:19:25.509 "num_base_bdevs": 4, 00:19:25.509 "num_base_bdevs_discovered": 0, 00:19:25.509 "num_base_bdevs_operational": 4, 00:19:25.509 "base_bdevs_list": [ 00:19:25.509 { 00:19:25.509 "name": "BaseBdev1", 00:19:25.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.509 "is_configured": false, 00:19:25.509 "data_offset": 0, 00:19:25.509 "data_size": 0 00:19:25.509 }, 00:19:25.509 { 00:19:25.509 "name": "BaseBdev2", 00:19:25.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.509 "is_configured": false, 00:19:25.509 "data_offset": 0, 00:19:25.509 "data_size": 0 00:19:25.509 }, 00:19:25.509 { 00:19:25.509 "name": "BaseBdev3", 00:19:25.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.509 "is_configured": false, 00:19:25.509 "data_offset": 0, 00:19:25.509 "data_size": 0 00:19:25.509 }, 00:19:25.509 { 00:19:25.509 "name": "BaseBdev4", 00:19:25.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.509 "is_configured": false, 00:19:25.509 "data_offset": 0, 00:19:25.509 "data_size": 0 00:19:25.509 } 00:19:25.509 ] 00:19:25.509 }' 00:19:25.509 13:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:25.509 13:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.078 13:19:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:26.337 [2024-07-25 13:19:36.734265] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:26.337 [2024-07-25 13:19:36.734298] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25d0f60 name Existed_Raid, state configuring 00:19:26.337 13:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:26.596 [2024-07-25 13:19:36.962883] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:26.596 [2024-07-25 13:19:36.962911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:26.596 [2024-07-25 13:19:36.962920] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:26.596 [2024-07-25 13:19:36.962930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:26.596 [2024-07-25 13:19:36.962938] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:26.596 [2024-07-25 13:19:36.962948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:26.596 [2024-07-25 13:19:36.962956] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:26.596 [2024-07-25 13:19:36.962966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:26.596 13:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:26.856 [2024-07-25 13:19:37.200911] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:26.856 BaseBdev1 00:19:26.856 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:26.856 13:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:26.856 13:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:26.856 13:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:26.856 13:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:26.856 13:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:26.856 13:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:27.115 13:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:27.375 [ 00:19:27.375 { 00:19:27.375 "name": "BaseBdev1", 00:19:27.375 "aliases": [ 00:19:27.375 "9ff48a10-8b84-493f-9cf6-398f26d90731" 00:19:27.375 ], 00:19:27.375 "product_name": "Malloc disk", 00:19:27.375 "block_size": 512, 00:19:27.375 "num_blocks": 65536, 00:19:27.375 "uuid": "9ff48a10-8b84-493f-9cf6-398f26d90731", 00:19:27.375 "assigned_rate_limits": { 00:19:27.375 "rw_ios_per_sec": 0, 00:19:27.375 "rw_mbytes_per_sec": 0, 00:19:27.375 "r_mbytes_per_sec": 0, 00:19:27.375 "w_mbytes_per_sec": 0 00:19:27.375 }, 00:19:27.375 "claimed": true, 00:19:27.375 "claim_type": "exclusive_write", 00:19:27.375 "zoned": false, 00:19:27.375 "supported_io_types": { 00:19:27.375 "read": true, 00:19:27.375 "write": true, 00:19:27.375 "unmap": true, 00:19:27.375 "flush": true, 00:19:27.375 
"reset": true, 00:19:27.375 "nvme_admin": false, 00:19:27.375 "nvme_io": false, 00:19:27.375 "nvme_io_md": false, 00:19:27.375 "write_zeroes": true, 00:19:27.375 "zcopy": true, 00:19:27.375 "get_zone_info": false, 00:19:27.375 "zone_management": false, 00:19:27.375 "zone_append": false, 00:19:27.375 "compare": false, 00:19:27.375 "compare_and_write": false, 00:19:27.375 "abort": true, 00:19:27.375 "seek_hole": false, 00:19:27.375 "seek_data": false, 00:19:27.375 "copy": true, 00:19:27.375 "nvme_iov_md": false 00:19:27.375 }, 00:19:27.375 "memory_domains": [ 00:19:27.375 { 00:19:27.375 "dma_device_id": "system", 00:19:27.375 "dma_device_type": 1 00:19:27.375 }, 00:19:27.375 { 00:19:27.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.375 "dma_device_type": 2 00:19:27.375 } 00:19:27.375 ], 00:19:27.375 "driver_specific": {} 00:19:27.375 } 00:19:27.375 ] 00:19:27.375 13:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:27.375 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:27.375 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:27.375 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:27.375 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:27.375 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:27.375 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:27.375 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:27.375 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:27.375 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:19:27.375 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:27.375 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.375 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.635 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:27.635 "name": "Existed_Raid", 00:19:27.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.635 "strip_size_kb": 64, 00:19:27.635 "state": "configuring", 00:19:27.635 "raid_level": "concat", 00:19:27.635 "superblock": false, 00:19:27.635 "num_base_bdevs": 4, 00:19:27.635 "num_base_bdevs_discovered": 1, 00:19:27.635 "num_base_bdevs_operational": 4, 00:19:27.635 "base_bdevs_list": [ 00:19:27.635 { 00:19:27.635 "name": "BaseBdev1", 00:19:27.635 "uuid": "9ff48a10-8b84-493f-9cf6-398f26d90731", 00:19:27.635 "is_configured": true, 00:19:27.635 "data_offset": 0, 00:19:27.635 "data_size": 65536 00:19:27.635 }, 00:19:27.635 { 00:19:27.635 "name": "BaseBdev2", 00:19:27.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.635 "is_configured": false, 00:19:27.635 "data_offset": 0, 00:19:27.635 "data_size": 0 00:19:27.635 }, 00:19:27.635 { 00:19:27.635 "name": "BaseBdev3", 00:19:27.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.635 "is_configured": false, 00:19:27.635 "data_offset": 0, 00:19:27.635 "data_size": 0 00:19:27.635 }, 00:19:27.635 { 00:19:27.635 "name": "BaseBdev4", 00:19:27.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.635 "is_configured": false, 00:19:27.635 "data_offset": 0, 00:19:27.635 "data_size": 0 00:19:27.635 } 00:19:27.635 ] 00:19:27.635 }' 00:19:27.635 13:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:19:27.635 13:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.203 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:28.203 [2024-07-25 13:19:38.660762] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:28.203 [2024-07-25 13:19:38.660798] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25d07d0 name Existed_Raid, state configuring 00:19:28.203 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:28.462 [2024-07-25 13:19:38.889390] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:28.462 [2024-07-25 13:19:38.890798] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.462 [2024-07-25 13:19:38.890829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.462 [2024-07-25 13:19:38.890838] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:28.462 [2024-07-25 13:19:38.890849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:28.462 [2024-07-25 13:19:38.890857] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:28.462 [2024-07-25 13:19:38.890867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:28.462 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:28.462 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:28.462 13:19:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:28.462 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:28.462 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:28.462 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:28.462 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:28.462 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:28.462 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:28.462 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.462 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.462 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:28.462 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.462 13:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.721 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:28.721 "name": "Existed_Raid", 00:19:28.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.721 "strip_size_kb": 64, 00:19:28.721 "state": "configuring", 00:19:28.721 "raid_level": "concat", 00:19:28.721 "superblock": false, 00:19:28.721 "num_base_bdevs": 4, 00:19:28.721 "num_base_bdevs_discovered": 1, 00:19:28.721 "num_base_bdevs_operational": 4, 00:19:28.721 "base_bdevs_list": [ 00:19:28.721 { 
00:19:28.721 "name": "BaseBdev1", 00:19:28.721 "uuid": "9ff48a10-8b84-493f-9cf6-398f26d90731", 00:19:28.721 "is_configured": true, 00:19:28.721 "data_offset": 0, 00:19:28.721 "data_size": 65536 00:19:28.721 }, 00:19:28.721 { 00:19:28.721 "name": "BaseBdev2", 00:19:28.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.721 "is_configured": false, 00:19:28.721 "data_offset": 0, 00:19:28.721 "data_size": 0 00:19:28.721 }, 00:19:28.722 { 00:19:28.722 "name": "BaseBdev3", 00:19:28.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.722 "is_configured": false, 00:19:28.722 "data_offset": 0, 00:19:28.722 "data_size": 0 00:19:28.722 }, 00:19:28.722 { 00:19:28.722 "name": "BaseBdev4", 00:19:28.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.722 "is_configured": false, 00:19:28.722 "data_offset": 0, 00:19:28.722 "data_size": 0 00:19:28.722 } 00:19:28.722 ] 00:19:28.722 }' 00:19:28.722 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:28.722 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.289 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:29.549 [2024-07-25 13:19:39.927467] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:29.549 BaseBdev2 00:19:29.549 13:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:29.549 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:29.549 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:29.549 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:29.549 13:19:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:29.549 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:29.549 13:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:29.808 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:30.099 [ 00:19:30.099 { 00:19:30.099 "name": "BaseBdev2", 00:19:30.099 "aliases": [ 00:19:30.099 "51568a57-77e6-4123-91c3-ef54a180f69a" 00:19:30.099 ], 00:19:30.099 "product_name": "Malloc disk", 00:19:30.099 "block_size": 512, 00:19:30.099 "num_blocks": 65536, 00:19:30.099 "uuid": "51568a57-77e6-4123-91c3-ef54a180f69a", 00:19:30.099 "assigned_rate_limits": { 00:19:30.099 "rw_ios_per_sec": 0, 00:19:30.099 "rw_mbytes_per_sec": 0, 00:19:30.099 "r_mbytes_per_sec": 0, 00:19:30.099 "w_mbytes_per_sec": 0 00:19:30.099 }, 00:19:30.099 "claimed": true, 00:19:30.099 "claim_type": "exclusive_write", 00:19:30.099 "zoned": false, 00:19:30.099 "supported_io_types": { 00:19:30.099 "read": true, 00:19:30.099 "write": true, 00:19:30.099 "unmap": true, 00:19:30.099 "flush": true, 00:19:30.099 "reset": true, 00:19:30.099 "nvme_admin": false, 00:19:30.099 "nvme_io": false, 00:19:30.099 "nvme_io_md": false, 00:19:30.099 "write_zeroes": true, 00:19:30.099 "zcopy": true, 00:19:30.099 "get_zone_info": false, 00:19:30.099 "zone_management": false, 00:19:30.099 "zone_append": false, 00:19:30.099 "compare": false, 00:19:30.099 "compare_and_write": false, 00:19:30.099 "abort": true, 00:19:30.099 "seek_hole": false, 00:19:30.099 "seek_data": false, 00:19:30.099 "copy": true, 00:19:30.099 "nvme_iov_md": false 00:19:30.099 }, 00:19:30.099 "memory_domains": [ 00:19:30.099 { 00:19:30.099 "dma_device_id": "system", 
00:19:30.099 "dma_device_type": 1 00:19:30.099 }, 00:19:30.099 { 00:19:30.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.099 "dma_device_type": 2 00:19:30.099 } 00:19:30.099 ], 00:19:30.099 "driver_specific": {} 00:19:30.099 } 00:19:30.099 ] 00:19:30.099 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:30.099 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:30.100 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:30.100 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:30.100 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:30.100 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:30.100 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:30.100 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:30.100 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:30.100 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:30.100 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:30.100 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:30.100 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:30.100 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.100 13:19:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.359 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:30.359 "name": "Existed_Raid", 00:19:30.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.359 "strip_size_kb": 64, 00:19:30.359 "state": "configuring", 00:19:30.359 "raid_level": "concat", 00:19:30.359 "superblock": false, 00:19:30.359 "num_base_bdevs": 4, 00:19:30.359 "num_base_bdevs_discovered": 2, 00:19:30.359 "num_base_bdevs_operational": 4, 00:19:30.359 "base_bdevs_list": [ 00:19:30.359 { 00:19:30.359 "name": "BaseBdev1", 00:19:30.359 "uuid": "9ff48a10-8b84-493f-9cf6-398f26d90731", 00:19:30.359 "is_configured": true, 00:19:30.359 "data_offset": 0, 00:19:30.359 "data_size": 65536 00:19:30.359 }, 00:19:30.359 { 00:19:30.359 "name": "BaseBdev2", 00:19:30.359 "uuid": "51568a57-77e6-4123-91c3-ef54a180f69a", 00:19:30.359 "is_configured": true, 00:19:30.359 "data_offset": 0, 00:19:30.359 "data_size": 65536 00:19:30.359 }, 00:19:30.359 { 00:19:30.359 "name": "BaseBdev3", 00:19:30.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.359 "is_configured": false, 00:19:30.359 "data_offset": 0, 00:19:30.359 "data_size": 0 00:19:30.359 }, 00:19:30.359 { 00:19:30.359 "name": "BaseBdev4", 00:19:30.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.359 "is_configured": false, 00:19:30.359 "data_offset": 0, 00:19:30.359 "data_size": 0 00:19:30.359 } 00:19:30.359 ] 00:19:30.359 }' 00:19:30.359 13:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:30.359 13:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.928 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:31.186 [2024-07-25 13:19:41.422540] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:31.186 BaseBdev3 00:19:31.186 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:31.186 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:31.186 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:31.186 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:31.186 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:31.186 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:31.186 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:31.186 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:31.445 [ 00:19:31.446 { 00:19:31.446 "name": "BaseBdev3", 00:19:31.446 "aliases": [ 00:19:31.446 "5db7d2c1-1aa6-4b07-897b-55a06d1238f2" 00:19:31.446 ], 00:19:31.446 "product_name": "Malloc disk", 00:19:31.446 "block_size": 512, 00:19:31.446 "num_blocks": 65536, 00:19:31.446 "uuid": "5db7d2c1-1aa6-4b07-897b-55a06d1238f2", 00:19:31.446 "assigned_rate_limits": { 00:19:31.446 "rw_ios_per_sec": 0, 00:19:31.446 "rw_mbytes_per_sec": 0, 00:19:31.446 "r_mbytes_per_sec": 0, 00:19:31.446 "w_mbytes_per_sec": 0 00:19:31.446 }, 00:19:31.446 "claimed": true, 00:19:31.446 "claim_type": "exclusive_write", 00:19:31.446 "zoned": false, 00:19:31.446 "supported_io_types": { 00:19:31.446 "read": true, 00:19:31.446 "write": true, 00:19:31.446 "unmap": true, 00:19:31.446 "flush": true, 00:19:31.446 
"reset": true, 00:19:31.446 "nvme_admin": false, 00:19:31.446 "nvme_io": false, 00:19:31.446 "nvme_io_md": false, 00:19:31.446 "write_zeroes": true, 00:19:31.446 "zcopy": true, 00:19:31.446 "get_zone_info": false, 00:19:31.446 "zone_management": false, 00:19:31.446 "zone_append": false, 00:19:31.446 "compare": false, 00:19:31.446 "compare_and_write": false, 00:19:31.446 "abort": true, 00:19:31.446 "seek_hole": false, 00:19:31.446 "seek_data": false, 00:19:31.446 "copy": true, 00:19:31.446 "nvme_iov_md": false 00:19:31.446 }, 00:19:31.446 "memory_domains": [ 00:19:31.446 { 00:19:31.446 "dma_device_id": "system", 00:19:31.446 "dma_device_type": 1 00:19:31.446 }, 00:19:31.446 { 00:19:31.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.446 "dma_device_type": 2 00:19:31.446 } 00:19:31.446 ], 00:19:31.446 "driver_specific": {} 00:19:31.446 } 00:19:31.446 ] 00:19:31.446 13:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:31.446 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:31.446 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:31.446 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:31.446 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:31.446 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:31.446 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:31.446 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:31.446 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:31.446 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:19:31.446 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:31.446 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:31.446 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:31.446 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.446 13:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.705 13:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:31.705 "name": "Existed_Raid", 00:19:31.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.705 "strip_size_kb": 64, 00:19:31.705 "state": "configuring", 00:19:31.705 "raid_level": "concat", 00:19:31.705 "superblock": false, 00:19:31.705 "num_base_bdevs": 4, 00:19:31.705 "num_base_bdevs_discovered": 3, 00:19:31.705 "num_base_bdevs_operational": 4, 00:19:31.705 "base_bdevs_list": [ 00:19:31.705 { 00:19:31.705 "name": "BaseBdev1", 00:19:31.705 "uuid": "9ff48a10-8b84-493f-9cf6-398f26d90731", 00:19:31.705 "is_configured": true, 00:19:31.705 "data_offset": 0, 00:19:31.705 "data_size": 65536 00:19:31.705 }, 00:19:31.705 { 00:19:31.705 "name": "BaseBdev2", 00:19:31.705 "uuid": "51568a57-77e6-4123-91c3-ef54a180f69a", 00:19:31.705 "is_configured": true, 00:19:31.705 "data_offset": 0, 00:19:31.705 "data_size": 65536 00:19:31.705 }, 00:19:31.705 { 00:19:31.705 "name": "BaseBdev3", 00:19:31.705 "uuid": "5db7d2c1-1aa6-4b07-897b-55a06d1238f2", 00:19:31.705 "is_configured": true, 00:19:31.705 "data_offset": 0, 00:19:31.705 "data_size": 65536 00:19:31.705 }, 00:19:31.705 { 00:19:31.705 "name": "BaseBdev4", 00:19:31.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.705 "is_configured": 
false, 00:19:31.705 "data_offset": 0, 00:19:31.705 "data_size": 0 00:19:31.705 } 00:19:31.705 ] 00:19:31.705 }' 00:19:31.705 13:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:31.705 13:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.274 13:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:32.533 [2024-07-25 13:19:42.937658] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:32.533 [2024-07-25 13:19:42.937692] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x25d1840 00:19:32.533 [2024-07-25 13:19:42.937700] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:32.533 [2024-07-25 13:19:42.937873] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25d1480 00:19:32.533 [2024-07-25 13:19:42.937987] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25d1840 00:19:32.533 [2024-07-25 13:19:42.937997] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x25d1840 00:19:32.533 [2024-07-25 13:19:42.938156] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.533 BaseBdev4 00:19:32.533 13:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:19:32.533 13:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:32.533 13:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:32.533 13:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:32.533 13:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:19:32.533 13:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:32.533 13:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:32.792 13:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:33.052 [ 00:19:33.052 { 00:19:33.052 "name": "BaseBdev4", 00:19:33.052 "aliases": [ 00:19:33.052 "f7dcbaa3-54b5-47cf-9659-acc317cff6bc" 00:19:33.052 ], 00:19:33.052 "product_name": "Malloc disk", 00:19:33.052 "block_size": 512, 00:19:33.052 "num_blocks": 65536, 00:19:33.052 "uuid": "f7dcbaa3-54b5-47cf-9659-acc317cff6bc", 00:19:33.052 "assigned_rate_limits": { 00:19:33.052 "rw_ios_per_sec": 0, 00:19:33.052 "rw_mbytes_per_sec": 0, 00:19:33.052 "r_mbytes_per_sec": 0, 00:19:33.052 "w_mbytes_per_sec": 0 00:19:33.052 }, 00:19:33.052 "claimed": true, 00:19:33.052 "claim_type": "exclusive_write", 00:19:33.052 "zoned": false, 00:19:33.052 "supported_io_types": { 00:19:33.052 "read": true, 00:19:33.052 "write": true, 00:19:33.052 "unmap": true, 00:19:33.052 "flush": true, 00:19:33.052 "reset": true, 00:19:33.052 "nvme_admin": false, 00:19:33.052 "nvme_io": false, 00:19:33.052 "nvme_io_md": false, 00:19:33.052 "write_zeroes": true, 00:19:33.052 "zcopy": true, 00:19:33.052 "get_zone_info": false, 00:19:33.052 "zone_management": false, 00:19:33.052 "zone_append": false, 00:19:33.052 "compare": false, 00:19:33.052 "compare_and_write": false, 00:19:33.052 "abort": true, 00:19:33.052 "seek_hole": false, 00:19:33.052 "seek_data": false, 00:19:33.052 "copy": true, 00:19:33.052 "nvme_iov_md": false 00:19:33.052 }, 00:19:33.052 "memory_domains": [ 00:19:33.052 { 00:19:33.052 "dma_device_id": "system", 00:19:33.052 "dma_device_type": 1 00:19:33.052 
}, 00:19:33.052 { 00:19:33.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.052 "dma_device_type": 2 00:19:33.052 } 00:19:33.052 ], 00:19:33.052 "driver_specific": {} 00:19:33.052 } 00:19:33.052 ] 00:19:33.052 13:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:33.052 13:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:33.052 13:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:33.052 13:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:33.052 13:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:33.052 13:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:33.052 13:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:33.052 13:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:33.052 13:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:33.052 13:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:33.052 13:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:33.052 13:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:33.052 13:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:33.052 13:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.052 13:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:19:33.311 13:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:33.311 "name": "Existed_Raid", 00:19:33.311 "uuid": "69026393-2d62-43b4-a552-ef339a4d6444", 00:19:33.311 "strip_size_kb": 64, 00:19:33.311 "state": "online", 00:19:33.311 "raid_level": "concat", 00:19:33.311 "superblock": false, 00:19:33.311 "num_base_bdevs": 4, 00:19:33.311 "num_base_bdevs_discovered": 4, 00:19:33.311 "num_base_bdevs_operational": 4, 00:19:33.311 "base_bdevs_list": [ 00:19:33.311 { 00:19:33.311 "name": "BaseBdev1", 00:19:33.311 "uuid": "9ff48a10-8b84-493f-9cf6-398f26d90731", 00:19:33.311 "is_configured": true, 00:19:33.311 "data_offset": 0, 00:19:33.311 "data_size": 65536 00:19:33.311 }, 00:19:33.311 { 00:19:33.311 "name": "BaseBdev2", 00:19:33.311 "uuid": "51568a57-77e6-4123-91c3-ef54a180f69a", 00:19:33.311 "is_configured": true, 00:19:33.311 "data_offset": 0, 00:19:33.311 "data_size": 65536 00:19:33.311 }, 00:19:33.311 { 00:19:33.311 "name": "BaseBdev3", 00:19:33.311 "uuid": "5db7d2c1-1aa6-4b07-897b-55a06d1238f2", 00:19:33.311 "is_configured": true, 00:19:33.311 "data_offset": 0, 00:19:33.311 "data_size": 65536 00:19:33.311 }, 00:19:33.311 { 00:19:33.311 "name": "BaseBdev4", 00:19:33.311 "uuid": "f7dcbaa3-54b5-47cf-9659-acc317cff6bc", 00:19:33.311 "is_configured": true, 00:19:33.311 "data_offset": 0, 00:19:33.311 "data_size": 65536 00:19:33.311 } 00:19:33.311 ] 00:19:33.311 }' 00:19:33.311 13:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:33.311 13:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.879 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:33.879 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:33.879 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 
00:19:33.879 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:33.879 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:33.879 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:33.879 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:33.879 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:34.139 [2024-07-25 13:19:44.421860] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.139 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:34.139 "name": "Existed_Raid", 00:19:34.139 "aliases": [ 00:19:34.139 "69026393-2d62-43b4-a552-ef339a4d6444" 00:19:34.139 ], 00:19:34.139 "product_name": "Raid Volume", 00:19:34.139 "block_size": 512, 00:19:34.139 "num_blocks": 262144, 00:19:34.139 "uuid": "69026393-2d62-43b4-a552-ef339a4d6444", 00:19:34.139 "assigned_rate_limits": { 00:19:34.139 "rw_ios_per_sec": 0, 00:19:34.139 "rw_mbytes_per_sec": 0, 00:19:34.139 "r_mbytes_per_sec": 0, 00:19:34.139 "w_mbytes_per_sec": 0 00:19:34.139 }, 00:19:34.139 "claimed": false, 00:19:34.139 "zoned": false, 00:19:34.139 "supported_io_types": { 00:19:34.139 "read": true, 00:19:34.139 "write": true, 00:19:34.139 "unmap": true, 00:19:34.139 "flush": true, 00:19:34.139 "reset": true, 00:19:34.139 "nvme_admin": false, 00:19:34.139 "nvme_io": false, 00:19:34.139 "nvme_io_md": false, 00:19:34.139 "write_zeroes": true, 00:19:34.139 "zcopy": false, 00:19:34.139 "get_zone_info": false, 00:19:34.139 "zone_management": false, 00:19:34.139 "zone_append": false, 00:19:34.139 "compare": false, 00:19:34.139 "compare_and_write": false, 00:19:34.139 "abort": false, 00:19:34.139 "seek_hole": false, 
00:19:34.139 "seek_data": false, 00:19:34.139 "copy": false, 00:19:34.139 "nvme_iov_md": false 00:19:34.139 }, 00:19:34.139 "memory_domains": [ 00:19:34.139 { 00:19:34.139 "dma_device_id": "system", 00:19:34.139 "dma_device_type": 1 00:19:34.139 }, 00:19:34.139 { 00:19:34.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.139 "dma_device_type": 2 00:19:34.139 }, 00:19:34.139 { 00:19:34.139 "dma_device_id": "system", 00:19:34.139 "dma_device_type": 1 00:19:34.139 }, 00:19:34.139 { 00:19:34.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.139 "dma_device_type": 2 00:19:34.140 }, 00:19:34.140 { 00:19:34.140 "dma_device_id": "system", 00:19:34.140 "dma_device_type": 1 00:19:34.140 }, 00:19:34.140 { 00:19:34.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.140 "dma_device_type": 2 00:19:34.140 }, 00:19:34.140 { 00:19:34.140 "dma_device_id": "system", 00:19:34.140 "dma_device_type": 1 00:19:34.140 }, 00:19:34.140 { 00:19:34.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.140 "dma_device_type": 2 00:19:34.140 } 00:19:34.140 ], 00:19:34.140 "driver_specific": { 00:19:34.140 "raid": { 00:19:34.140 "uuid": "69026393-2d62-43b4-a552-ef339a4d6444", 00:19:34.140 "strip_size_kb": 64, 00:19:34.140 "state": "online", 00:19:34.140 "raid_level": "concat", 00:19:34.140 "superblock": false, 00:19:34.140 "num_base_bdevs": 4, 00:19:34.140 "num_base_bdevs_discovered": 4, 00:19:34.140 "num_base_bdevs_operational": 4, 00:19:34.140 "base_bdevs_list": [ 00:19:34.140 { 00:19:34.140 "name": "BaseBdev1", 00:19:34.140 "uuid": "9ff48a10-8b84-493f-9cf6-398f26d90731", 00:19:34.140 "is_configured": true, 00:19:34.140 "data_offset": 0, 00:19:34.140 "data_size": 65536 00:19:34.140 }, 00:19:34.140 { 00:19:34.140 "name": "BaseBdev2", 00:19:34.140 "uuid": "51568a57-77e6-4123-91c3-ef54a180f69a", 00:19:34.140 "is_configured": true, 00:19:34.140 "data_offset": 0, 00:19:34.140 "data_size": 65536 00:19:34.140 }, 00:19:34.140 { 00:19:34.140 "name": "BaseBdev3", 00:19:34.140 "uuid": 
"5db7d2c1-1aa6-4b07-897b-55a06d1238f2", 00:19:34.140 "is_configured": true, 00:19:34.140 "data_offset": 0, 00:19:34.140 "data_size": 65536 00:19:34.140 }, 00:19:34.140 { 00:19:34.140 "name": "BaseBdev4", 00:19:34.140 "uuid": "f7dcbaa3-54b5-47cf-9659-acc317cff6bc", 00:19:34.140 "is_configured": true, 00:19:34.140 "data_offset": 0, 00:19:34.140 "data_size": 65536 00:19:34.140 } 00:19:34.140 ] 00:19:34.140 } 00:19:34.140 } 00:19:34.140 }' 00:19:34.140 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:34.140 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:34.140 BaseBdev2 00:19:34.140 BaseBdev3 00:19:34.140 BaseBdev4' 00:19:34.140 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:34.140 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:34.140 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:34.399 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:34.399 "name": "BaseBdev1", 00:19:34.399 "aliases": [ 00:19:34.399 "9ff48a10-8b84-493f-9cf6-398f26d90731" 00:19:34.399 ], 00:19:34.399 "product_name": "Malloc disk", 00:19:34.399 "block_size": 512, 00:19:34.399 "num_blocks": 65536, 00:19:34.399 "uuid": "9ff48a10-8b84-493f-9cf6-398f26d90731", 00:19:34.399 "assigned_rate_limits": { 00:19:34.399 "rw_ios_per_sec": 0, 00:19:34.399 "rw_mbytes_per_sec": 0, 00:19:34.399 "r_mbytes_per_sec": 0, 00:19:34.399 "w_mbytes_per_sec": 0 00:19:34.399 }, 00:19:34.399 "claimed": true, 00:19:34.399 "claim_type": "exclusive_write", 00:19:34.399 "zoned": false, 00:19:34.399 "supported_io_types": { 00:19:34.399 "read": true, 00:19:34.399 
"write": true, 00:19:34.399 "unmap": true, 00:19:34.399 "flush": true, 00:19:34.399 "reset": true, 00:19:34.399 "nvme_admin": false, 00:19:34.399 "nvme_io": false, 00:19:34.399 "nvme_io_md": false, 00:19:34.399 "write_zeroes": true, 00:19:34.399 "zcopy": true, 00:19:34.399 "get_zone_info": false, 00:19:34.399 "zone_management": false, 00:19:34.399 "zone_append": false, 00:19:34.399 "compare": false, 00:19:34.399 "compare_and_write": false, 00:19:34.399 "abort": true, 00:19:34.399 "seek_hole": false, 00:19:34.399 "seek_data": false, 00:19:34.399 "copy": true, 00:19:34.399 "nvme_iov_md": false 00:19:34.399 }, 00:19:34.399 "memory_domains": [ 00:19:34.399 { 00:19:34.399 "dma_device_id": "system", 00:19:34.399 "dma_device_type": 1 00:19:34.399 }, 00:19:34.399 { 00:19:34.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.399 "dma_device_type": 2 00:19:34.399 } 00:19:34.399 ], 00:19:34.399 "driver_specific": {} 00:19:34.399 }' 00:19:34.399 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:34.399 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:34.399 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:34.399 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:34.399 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:34.659 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:34.659 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:34.659 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:34.659 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:34.659 13:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:34.659 13:19:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:34.659 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:34.659 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:34.659 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:34.659 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:34.918 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:34.918 "name": "BaseBdev2", 00:19:34.918 "aliases": [ 00:19:34.918 "51568a57-77e6-4123-91c3-ef54a180f69a" 00:19:34.918 ], 00:19:34.918 "product_name": "Malloc disk", 00:19:34.918 "block_size": 512, 00:19:34.918 "num_blocks": 65536, 00:19:34.918 "uuid": "51568a57-77e6-4123-91c3-ef54a180f69a", 00:19:34.918 "assigned_rate_limits": { 00:19:34.918 "rw_ios_per_sec": 0, 00:19:34.918 "rw_mbytes_per_sec": 0, 00:19:34.918 "r_mbytes_per_sec": 0, 00:19:34.918 "w_mbytes_per_sec": 0 00:19:34.918 }, 00:19:34.918 "claimed": true, 00:19:34.918 "claim_type": "exclusive_write", 00:19:34.918 "zoned": false, 00:19:34.918 "supported_io_types": { 00:19:34.918 "read": true, 00:19:34.918 "write": true, 00:19:34.918 "unmap": true, 00:19:34.918 "flush": true, 00:19:34.918 "reset": true, 00:19:34.918 "nvme_admin": false, 00:19:34.918 "nvme_io": false, 00:19:34.918 "nvme_io_md": false, 00:19:34.918 "write_zeroes": true, 00:19:34.918 "zcopy": true, 00:19:34.918 "get_zone_info": false, 00:19:34.918 "zone_management": false, 00:19:34.918 "zone_append": false, 00:19:34.918 "compare": false, 00:19:34.918 "compare_and_write": false, 00:19:34.918 "abort": true, 00:19:34.918 "seek_hole": false, 00:19:34.918 "seek_data": false, 00:19:34.918 "copy": true, 00:19:34.918 "nvme_iov_md": false 00:19:34.918 }, 
00:19:34.918 "memory_domains": [ 00:19:34.918 { 00:19:34.918 "dma_device_id": "system", 00:19:34.918 "dma_device_type": 1 00:19:34.918 }, 00:19:34.918 { 00:19:34.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.918 "dma_device_type": 2 00:19:34.918 } 00:19:34.918 ], 00:19:34.918 "driver_specific": {} 00:19:34.918 }' 00:19:34.918 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:34.918 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:34.918 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:34.918 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:35.177 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:35.177 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:35.177 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:35.177 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:35.177 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:35.177 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:35.177 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:35.177 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:35.177 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:35.177 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:35.177 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:35.436 13:19:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:35.436 "name": "BaseBdev3", 00:19:35.436 "aliases": [ 00:19:35.436 "5db7d2c1-1aa6-4b07-897b-55a06d1238f2" 00:19:35.436 ], 00:19:35.436 "product_name": "Malloc disk", 00:19:35.436 "block_size": 512, 00:19:35.436 "num_blocks": 65536, 00:19:35.436 "uuid": "5db7d2c1-1aa6-4b07-897b-55a06d1238f2", 00:19:35.436 "assigned_rate_limits": { 00:19:35.436 "rw_ios_per_sec": 0, 00:19:35.436 "rw_mbytes_per_sec": 0, 00:19:35.436 "r_mbytes_per_sec": 0, 00:19:35.436 "w_mbytes_per_sec": 0 00:19:35.436 }, 00:19:35.436 "claimed": true, 00:19:35.436 "claim_type": "exclusive_write", 00:19:35.436 "zoned": false, 00:19:35.436 "supported_io_types": { 00:19:35.436 "read": true, 00:19:35.436 "write": true, 00:19:35.436 "unmap": true, 00:19:35.436 "flush": true, 00:19:35.436 "reset": true, 00:19:35.436 "nvme_admin": false, 00:19:35.436 "nvme_io": false, 00:19:35.436 "nvme_io_md": false, 00:19:35.436 "write_zeroes": true, 00:19:35.436 "zcopy": true, 00:19:35.436 "get_zone_info": false, 00:19:35.436 "zone_management": false, 00:19:35.436 "zone_append": false, 00:19:35.436 "compare": false, 00:19:35.436 "compare_and_write": false, 00:19:35.436 "abort": true, 00:19:35.436 "seek_hole": false, 00:19:35.436 "seek_data": false, 00:19:35.436 "copy": true, 00:19:35.436 "nvme_iov_md": false 00:19:35.436 }, 00:19:35.436 "memory_domains": [ 00:19:35.436 { 00:19:35.436 "dma_device_id": "system", 00:19:35.436 "dma_device_type": 1 00:19:35.436 }, 00:19:35.436 { 00:19:35.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.436 "dma_device_type": 2 00:19:35.436 } 00:19:35.436 ], 00:19:35.436 "driver_specific": {} 00:19:35.436 }' 00:19:35.436 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:35.436 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:35.695 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:19:35.695 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:35.695 13:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:35.695 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:35.695 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:35.695 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:35.695 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:35.695 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:35.695 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:35.954 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:35.954 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:35.954 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:35.954 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:36.214 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:36.214 "name": "BaseBdev4", 00:19:36.214 "aliases": [ 00:19:36.214 "f7dcbaa3-54b5-47cf-9659-acc317cff6bc" 00:19:36.214 ], 00:19:36.214 "product_name": "Malloc disk", 00:19:36.214 "block_size": 512, 00:19:36.214 "num_blocks": 65536, 00:19:36.214 "uuid": "f7dcbaa3-54b5-47cf-9659-acc317cff6bc", 00:19:36.214 "assigned_rate_limits": { 00:19:36.214 "rw_ios_per_sec": 0, 00:19:36.214 "rw_mbytes_per_sec": 0, 00:19:36.214 "r_mbytes_per_sec": 0, 00:19:36.214 "w_mbytes_per_sec": 0 00:19:36.214 }, 00:19:36.214 "claimed": true, 00:19:36.214 
"claim_type": "exclusive_write", 00:19:36.214 "zoned": false, 00:19:36.214 "supported_io_types": { 00:19:36.214 "read": true, 00:19:36.214 "write": true, 00:19:36.214 "unmap": true, 00:19:36.214 "flush": true, 00:19:36.214 "reset": true, 00:19:36.214 "nvme_admin": false, 00:19:36.214 "nvme_io": false, 00:19:36.214 "nvme_io_md": false, 00:19:36.214 "write_zeroes": true, 00:19:36.214 "zcopy": true, 00:19:36.214 "get_zone_info": false, 00:19:36.214 "zone_management": false, 00:19:36.214 "zone_append": false, 00:19:36.214 "compare": false, 00:19:36.214 "compare_and_write": false, 00:19:36.214 "abort": true, 00:19:36.214 "seek_hole": false, 00:19:36.214 "seek_data": false, 00:19:36.214 "copy": true, 00:19:36.214 "nvme_iov_md": false 00:19:36.214 }, 00:19:36.214 "memory_domains": [ 00:19:36.214 { 00:19:36.214 "dma_device_id": "system", 00:19:36.214 "dma_device_type": 1 00:19:36.214 }, 00:19:36.214 { 00:19:36.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.214 "dma_device_type": 2 00:19:36.214 } 00:19:36.214 ], 00:19:36.214 "driver_specific": {} 00:19:36.214 }' 00:19:36.214 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.214 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.214 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:36.214 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.214 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.214 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:36.214 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.214 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:36.474 [2024-07-25 13:19:46.936423] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:36.474 [2024-07-25 13:19:46.936450] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:36.474 [2024-07-25 13:19:46.936494] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:36.474 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:36.733 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.733 13:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.733 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:36.733 "name": "Existed_Raid", 00:19:36.733 "uuid": "69026393-2d62-43b4-a552-ef339a4d6444", 00:19:36.733 "strip_size_kb": 64, 00:19:36.733 "state": "offline", 00:19:36.733 "raid_level": "concat", 00:19:36.733 "superblock": false, 00:19:36.733 "num_base_bdevs": 4, 00:19:36.733 "num_base_bdevs_discovered": 3, 00:19:36.733 "num_base_bdevs_operational": 3, 00:19:36.733 "base_bdevs_list": [ 00:19:36.733 { 00:19:36.733 "name": null, 00:19:36.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.733 "is_configured": false, 00:19:36.733 "data_offset": 0, 00:19:36.733 "data_size": 65536 00:19:36.733 }, 00:19:36.733 { 00:19:36.733 "name": "BaseBdev2", 00:19:36.733 "uuid": "51568a57-77e6-4123-91c3-ef54a180f69a", 00:19:36.733 "is_configured": true, 00:19:36.733 "data_offset": 0, 00:19:36.733 "data_size": 65536 00:19:36.733 }, 00:19:36.733 { 00:19:36.733 "name": "BaseBdev3", 00:19:36.733 "uuid": "5db7d2c1-1aa6-4b07-897b-55a06d1238f2", 00:19:36.733 "is_configured": true, 00:19:36.733 
"data_offset": 0, 00:19:36.733 "data_size": 65536 00:19:36.733 }, 00:19:36.733 { 00:19:36.733 "name": "BaseBdev4", 00:19:36.733 "uuid": "f7dcbaa3-54b5-47cf-9659-acc317cff6bc", 00:19:36.733 "is_configured": true, 00:19:36.733 "data_offset": 0, 00:19:36.733 "data_size": 65536 00:19:36.733 } 00:19:36.733 ] 00:19:36.733 }' 00:19:36.733 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:36.733 13:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.302 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:37.302 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:37.302 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:37.302 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.561 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:37.562 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:37.562 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:37.821 [2024-07-25 13:19:48.156608] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:37.821 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:37.821 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:37.821 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:19:37.821 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:38.081 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:38.081 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:38.081 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:38.340 [2024-07-25 13:19:48.631831] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:38.340 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:38.340 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:38.340 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.340 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:38.599 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:38.599 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:38.599 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:38.859 [2024-07-25 13:19:49.098983] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:38.859 [2024-07-25 13:19:49.099023] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25d1840 name Existed_Raid, state offline 00:19:38.859 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:38.859 13:19:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:38.859 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.859 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:39.119 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:39.119 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:39.119 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:19:39.119 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:39.119 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:39.119 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:39.119 BaseBdev2 00:19:39.119 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:39.119 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:39.119 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:39.119 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:39.119 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:39.119 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:39.119 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:39.378 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:39.637 [ 00:19:39.637 { 00:19:39.637 "name": "BaseBdev2", 00:19:39.637 "aliases": [ 00:19:39.637 "7ef8dcf5-1f3e-4d6b-9924-106688b03911" 00:19:39.637 ], 00:19:39.637 "product_name": "Malloc disk", 00:19:39.637 "block_size": 512, 00:19:39.637 "num_blocks": 65536, 00:19:39.637 "uuid": "7ef8dcf5-1f3e-4d6b-9924-106688b03911", 00:19:39.638 "assigned_rate_limits": { 00:19:39.638 "rw_ios_per_sec": 0, 00:19:39.638 "rw_mbytes_per_sec": 0, 00:19:39.638 "r_mbytes_per_sec": 0, 00:19:39.638 "w_mbytes_per_sec": 0 00:19:39.638 }, 00:19:39.638 "claimed": false, 00:19:39.638 "zoned": false, 00:19:39.638 "supported_io_types": { 00:19:39.638 "read": true, 00:19:39.638 "write": true, 00:19:39.638 "unmap": true, 00:19:39.638 "flush": true, 00:19:39.638 "reset": true, 00:19:39.638 "nvme_admin": false, 00:19:39.638 "nvme_io": false, 00:19:39.638 "nvme_io_md": false, 00:19:39.638 "write_zeroes": true, 00:19:39.638 "zcopy": true, 00:19:39.638 "get_zone_info": false, 00:19:39.638 "zone_management": false, 00:19:39.638 "zone_append": false, 00:19:39.638 "compare": false, 00:19:39.638 "compare_and_write": false, 00:19:39.638 "abort": true, 00:19:39.638 "seek_hole": false, 00:19:39.638 "seek_data": false, 00:19:39.638 "copy": true, 00:19:39.638 "nvme_iov_md": false 00:19:39.638 }, 00:19:39.638 "memory_domains": [ 00:19:39.638 { 00:19:39.638 "dma_device_id": "system", 00:19:39.638 "dma_device_type": 1 00:19:39.638 }, 00:19:39.638 { 00:19:39.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.638 "dma_device_type": 2 00:19:39.638 } 00:19:39.638 ], 00:19:39.638 "driver_specific": {} 00:19:39.638 } 00:19:39.638 ] 00:19:39.638 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:39.638 
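The trace above ends the `waitforbdev BaseBdev2` sequence: the test fetches the bdev's JSON and then asserts on individual fields (`jq .block_size`, `jq .md_size`, and so on, at bdev_raid.sh@205-208). The following is a minimal standalone sketch of that field-check pattern, with a canned JSON string standing in for the `bdev_get_bdevs` RPC response and a bash regex standing in for jq, so it runs without an SPDK target; the data values are hypothetical.

```shell
# Canned stand-in for one entry of `rpc.py bdev_get_bdevs -b BaseBdev2` output.
base_bdev_info='{"name":"BaseBdev2","block_size":512,"num_blocks":65536}'

# Extract block_size with a bash regex (the real test pipes through jq).
if [[ $base_bdev_info =~ \"block_size\":\ *([0-9]+) ]]; then
  block_size=${BASH_REMATCH[1]}
fi

# Assert the expected geometry, mirroring `[[ 512 == 512 ]]` in the trace.
[[ $block_size == 512 ]] && echo "block_size ok: $block_size"
```

In the real script the same pattern repeats per field (`md_size`, `md_interleave`, `dif_type` must all be `null` for a plain malloc bdev), which is why each jq command appears twice in the xtrace output: once for the command substitution, once for the comparison.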
13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:39.638 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:39.638 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:39.897 BaseBdev3 00:19:39.897 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:39.897 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:39.897 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:39.897 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:39.897 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:39.897 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:39.897 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:40.156 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:40.416 [ 00:19:40.416 { 00:19:40.416 "name": "BaseBdev3", 00:19:40.416 "aliases": [ 00:19:40.416 "7ea97d2a-59cd-40bd-85f9-360142343b85" 00:19:40.416 ], 00:19:40.416 "product_name": "Malloc disk", 00:19:40.416 "block_size": 512, 00:19:40.416 "num_blocks": 65536, 00:19:40.416 "uuid": "7ea97d2a-59cd-40bd-85f9-360142343b85", 00:19:40.416 "assigned_rate_limits": { 00:19:40.416 "rw_ios_per_sec": 0, 00:19:40.416 "rw_mbytes_per_sec": 0, 00:19:40.416 
"r_mbytes_per_sec": 0, 00:19:40.416 "w_mbytes_per_sec": 0 00:19:40.416 }, 00:19:40.416 "claimed": false, 00:19:40.416 "zoned": false, 00:19:40.416 "supported_io_types": { 00:19:40.416 "read": true, 00:19:40.416 "write": true, 00:19:40.416 "unmap": true, 00:19:40.416 "flush": true, 00:19:40.416 "reset": true, 00:19:40.416 "nvme_admin": false, 00:19:40.416 "nvme_io": false, 00:19:40.416 "nvme_io_md": false, 00:19:40.416 "write_zeroes": true, 00:19:40.416 "zcopy": true, 00:19:40.416 "get_zone_info": false, 00:19:40.416 "zone_management": false, 00:19:40.416 "zone_append": false, 00:19:40.416 "compare": false, 00:19:40.416 "compare_and_write": false, 00:19:40.416 "abort": true, 00:19:40.416 "seek_hole": false, 00:19:40.416 "seek_data": false, 00:19:40.416 "copy": true, 00:19:40.416 "nvme_iov_md": false 00:19:40.416 }, 00:19:40.416 "memory_domains": [ 00:19:40.416 { 00:19:40.416 "dma_device_id": "system", 00:19:40.416 "dma_device_type": 1 00:19:40.416 }, 00:19:40.416 { 00:19:40.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.416 "dma_device_type": 2 00:19:40.416 } 00:19:40.416 ], 00:19:40.416 "driver_specific": {} 00:19:40.416 } 00:19:40.416 ] 00:19:40.416 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:40.416 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:40.416 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:40.416 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:40.676 BaseBdev4 00:19:40.676 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:19:40.676 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:40.676 13:19:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:40.676 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:40.676 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:40.676 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:40.676 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:40.936 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:40.936 [ 00:19:40.936 { 00:19:40.936 "name": "BaseBdev4", 00:19:40.936 "aliases": [ 00:19:40.936 "99648ec9-036e-442e-83ae-9a5d48396e64" 00:19:40.936 ], 00:19:40.936 "product_name": "Malloc disk", 00:19:40.936 "block_size": 512, 00:19:40.936 "num_blocks": 65536, 00:19:40.936 "uuid": "99648ec9-036e-442e-83ae-9a5d48396e64", 00:19:40.936 "assigned_rate_limits": { 00:19:40.936 "rw_ios_per_sec": 0, 00:19:40.936 "rw_mbytes_per_sec": 0, 00:19:40.936 "r_mbytes_per_sec": 0, 00:19:40.936 "w_mbytes_per_sec": 0 00:19:40.936 }, 00:19:40.936 "claimed": false, 00:19:40.936 "zoned": false, 00:19:40.936 "supported_io_types": { 00:19:40.936 "read": true, 00:19:40.936 "write": true, 00:19:40.936 "unmap": true, 00:19:40.936 "flush": true, 00:19:40.937 "reset": true, 00:19:40.937 "nvme_admin": false, 00:19:40.937 "nvme_io": false, 00:19:40.937 "nvme_io_md": false, 00:19:40.937 "write_zeroes": true, 00:19:40.937 "zcopy": true, 00:19:40.937 "get_zone_info": false, 00:19:40.937 "zone_management": false, 00:19:40.937 "zone_append": false, 00:19:40.937 "compare": false, 00:19:40.937 "compare_and_write": false, 00:19:40.937 "abort": true, 00:19:40.937 
"seek_hole": false, 00:19:40.937 "seek_data": false, 00:19:40.937 "copy": true, 00:19:40.937 "nvme_iov_md": false 00:19:40.937 }, 00:19:40.937 "memory_domains": [ 00:19:40.937 { 00:19:40.937 "dma_device_id": "system", 00:19:40.937 "dma_device_type": 1 00:19:40.937 }, 00:19:40.937 { 00:19:40.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.937 "dma_device_type": 2 00:19:40.937 } 00:19:40.937 ], 00:19:40.937 "driver_specific": {} 00:19:40.937 } 00:19:40.937 ] 00:19:40.937 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:40.937 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:40.937 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:40.937 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:41.196 [2024-07-25 13:19:51.588342] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:41.196 [2024-07-25 13:19:51.588382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:41.196 [2024-07-25 13:19:51.588399] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:41.196 [2024-07-25 13:19:51.589659] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:41.196 [2024-07-25 13:19:51.589698] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:41.196 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:41.196 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:41.196 13:19:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:41.196 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:41.196 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:41.196 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:41.196 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:41.196 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:41.197 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:41.197 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:41.197 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.197 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.456 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:41.456 "name": "Existed_Raid", 00:19:41.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.456 "strip_size_kb": 64, 00:19:41.456 "state": "configuring", 00:19:41.456 "raid_level": "concat", 00:19:41.456 "superblock": false, 00:19:41.456 "num_base_bdevs": 4, 00:19:41.457 "num_base_bdevs_discovered": 3, 00:19:41.457 "num_base_bdevs_operational": 4, 00:19:41.457 "base_bdevs_list": [ 00:19:41.457 { 00:19:41.457 "name": "BaseBdev1", 00:19:41.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.457 "is_configured": false, 00:19:41.457 "data_offset": 0, 00:19:41.457 "data_size": 0 00:19:41.457 }, 00:19:41.457 { 00:19:41.457 "name": "BaseBdev2", 00:19:41.457 "uuid": 
"7ef8dcf5-1f3e-4d6b-9924-106688b03911", 00:19:41.457 "is_configured": true, 00:19:41.457 "data_offset": 0, 00:19:41.457 "data_size": 65536 00:19:41.457 }, 00:19:41.457 { 00:19:41.457 "name": "BaseBdev3", 00:19:41.457 "uuid": "7ea97d2a-59cd-40bd-85f9-360142343b85", 00:19:41.457 "is_configured": true, 00:19:41.457 "data_offset": 0, 00:19:41.457 "data_size": 65536 00:19:41.457 }, 00:19:41.457 { 00:19:41.457 "name": "BaseBdev4", 00:19:41.457 "uuid": "99648ec9-036e-442e-83ae-9a5d48396e64", 00:19:41.457 "is_configured": true, 00:19:41.457 "data_offset": 0, 00:19:41.457 "data_size": 65536 00:19:41.457 } 00:19:41.457 ] 00:19:41.457 }' 00:19:41.457 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:41.457 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.025 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:42.284 [2024-07-25 13:19:52.619020] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:42.284 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:42.284 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:42.284 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:42.284 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:42.284 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:42.284 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:42.284 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:19:42.284 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:42.284 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:42.284 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:42.284 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.284 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.542 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:42.542 "name": "Existed_Raid", 00:19:42.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.542 "strip_size_kb": 64, 00:19:42.542 "state": "configuring", 00:19:42.542 "raid_level": "concat", 00:19:42.542 "superblock": false, 00:19:42.542 "num_base_bdevs": 4, 00:19:42.542 "num_base_bdevs_discovered": 2, 00:19:42.542 "num_base_bdevs_operational": 4, 00:19:42.542 "base_bdevs_list": [ 00:19:42.542 { 00:19:42.542 "name": "BaseBdev1", 00:19:42.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.542 "is_configured": false, 00:19:42.542 "data_offset": 0, 00:19:42.542 "data_size": 0 00:19:42.542 }, 00:19:42.542 { 00:19:42.542 "name": null, 00:19:42.542 "uuid": "7ef8dcf5-1f3e-4d6b-9924-106688b03911", 00:19:42.542 "is_configured": false, 00:19:42.542 "data_offset": 0, 00:19:42.542 "data_size": 65536 00:19:42.542 }, 00:19:42.542 { 00:19:42.542 "name": "BaseBdev3", 00:19:42.542 "uuid": "7ea97d2a-59cd-40bd-85f9-360142343b85", 00:19:42.542 "is_configured": true, 00:19:42.542 "data_offset": 0, 00:19:42.542 "data_size": 65536 00:19:42.542 }, 00:19:42.542 { 00:19:42.542 "name": "BaseBdev4", 00:19:42.542 "uuid": "99648ec9-036e-442e-83ae-9a5d48396e64", 00:19:42.542 "is_configured": true, 00:19:42.542 
"data_offset": 0, 00:19:42.542 "data_size": 65536 00:19:42.542 } 00:19:42.542 ] 00:19:42.542 }' 00:19:42.542 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:42.542 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.136 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.136 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:43.394 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:43.394 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:43.653 [2024-07-25 13:19:53.893634] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:43.653 BaseBdev1 00:19:43.653 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:43.653 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:43.653 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:43.653 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:43.653 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:43.653 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:43.653 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:43.653 
13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:43.911 [ 00:19:43.911 { 00:19:43.911 "name": "BaseBdev1", 00:19:43.911 "aliases": [ 00:19:43.911 "8cb6103e-8992-4cf6-8ef9-d79ab63240c2" 00:19:43.911 ], 00:19:43.911 "product_name": "Malloc disk", 00:19:43.911 "block_size": 512, 00:19:43.911 "num_blocks": 65536, 00:19:43.911 "uuid": "8cb6103e-8992-4cf6-8ef9-d79ab63240c2", 00:19:43.911 "assigned_rate_limits": { 00:19:43.911 "rw_ios_per_sec": 0, 00:19:43.911 "rw_mbytes_per_sec": 0, 00:19:43.911 "r_mbytes_per_sec": 0, 00:19:43.911 "w_mbytes_per_sec": 0 00:19:43.911 }, 00:19:43.911 "claimed": true, 00:19:43.911 "claim_type": "exclusive_write", 00:19:43.911 "zoned": false, 00:19:43.911 "supported_io_types": { 00:19:43.911 "read": true, 00:19:43.911 "write": true, 00:19:43.911 "unmap": true, 00:19:43.911 "flush": true, 00:19:43.911 "reset": true, 00:19:43.911 "nvme_admin": false, 00:19:43.911 "nvme_io": false, 00:19:43.911 "nvme_io_md": false, 00:19:43.911 "write_zeroes": true, 00:19:43.911 "zcopy": true, 00:19:43.911 "get_zone_info": false, 00:19:43.911 "zone_management": false, 00:19:43.911 "zone_append": false, 00:19:43.911 "compare": false, 00:19:43.911 "compare_and_write": false, 00:19:43.911 "abort": true, 00:19:43.911 "seek_hole": false, 00:19:43.911 "seek_data": false, 00:19:43.911 "copy": true, 00:19:43.911 "nvme_iov_md": false 00:19:43.911 }, 00:19:43.911 "memory_domains": [ 00:19:43.911 { 00:19:43.912 "dma_device_id": "system", 00:19:43.912 "dma_device_type": 1 00:19:43.912 }, 00:19:43.912 { 00:19:43.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.912 "dma_device_type": 2 00:19:43.912 } 00:19:43.912 ], 00:19:43.912 "driver_specific": {} 00:19:43.912 } 00:19:43.912 ] 00:19:43.912 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:43.912 13:19:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:43.912 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:43.912 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:43.912 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:43.912 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:43.912 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:43.912 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.912 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.912 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.912 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:43.912 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.912 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.170 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:44.170 "name": "Existed_Raid", 00:19:44.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.170 "strip_size_kb": 64, 00:19:44.170 "state": "configuring", 00:19:44.170 "raid_level": "concat", 00:19:44.170 "superblock": false, 00:19:44.170 "num_base_bdevs": 4, 00:19:44.170 "num_base_bdevs_discovered": 3, 00:19:44.170 "num_base_bdevs_operational": 4, 00:19:44.170 "base_bdevs_list": [ 00:19:44.170 { 
00:19:44.170 "name": "BaseBdev1", 00:19:44.170 "uuid": "8cb6103e-8992-4cf6-8ef9-d79ab63240c2", 00:19:44.170 "is_configured": true, 00:19:44.170 "data_offset": 0, 00:19:44.170 "data_size": 65536 00:19:44.170 }, 00:19:44.170 { 00:19:44.170 "name": null, 00:19:44.170 "uuid": "7ef8dcf5-1f3e-4d6b-9924-106688b03911", 00:19:44.170 "is_configured": false, 00:19:44.170 "data_offset": 0, 00:19:44.170 "data_size": 65536 00:19:44.170 }, 00:19:44.170 { 00:19:44.170 "name": "BaseBdev3", 00:19:44.170 "uuid": "7ea97d2a-59cd-40bd-85f9-360142343b85", 00:19:44.170 "is_configured": true, 00:19:44.170 "data_offset": 0, 00:19:44.170 "data_size": 65536 00:19:44.170 }, 00:19:44.170 { 00:19:44.170 "name": "BaseBdev4", 00:19:44.170 "uuid": "99648ec9-036e-442e-83ae-9a5d48396e64", 00:19:44.170 "is_configured": true, 00:19:44.170 "data_offset": 0, 00:19:44.170 "data_size": 65536 00:19:44.170 } 00:19:44.170 ] 00:19:44.170 }' 00:19:44.170 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:44.170 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.737 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.737 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:44.996 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:44.996 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:45.254 [2024-07-25 13:19:55.602169] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:45.255 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 4 00:19:45.255 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:45.255 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:45.255 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:45.255 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:45.255 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:45.255 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:45.255 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:45.255 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:45.255 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:45.255 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.255 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.513 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:45.513 "name": "Existed_Raid", 00:19:45.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.513 "strip_size_kb": 64, 00:19:45.513 "state": "configuring", 00:19:45.513 "raid_level": "concat", 00:19:45.513 "superblock": false, 00:19:45.513 "num_base_bdevs": 4, 00:19:45.513 "num_base_bdevs_discovered": 2, 00:19:45.513 "num_base_bdevs_operational": 4, 00:19:45.513 "base_bdevs_list": [ 00:19:45.513 { 00:19:45.513 "name": "BaseBdev1", 00:19:45.513 "uuid": "8cb6103e-8992-4cf6-8ef9-d79ab63240c2", 00:19:45.513 
"is_configured": true, 00:19:45.513 "data_offset": 0, 00:19:45.513 "data_size": 65536 00:19:45.513 }, 00:19:45.513 { 00:19:45.513 "name": null, 00:19:45.513 "uuid": "7ef8dcf5-1f3e-4d6b-9924-106688b03911", 00:19:45.513 "is_configured": false, 00:19:45.513 "data_offset": 0, 00:19:45.513 "data_size": 65536 00:19:45.513 }, 00:19:45.513 { 00:19:45.513 "name": null, 00:19:45.513 "uuid": "7ea97d2a-59cd-40bd-85f9-360142343b85", 00:19:45.513 "is_configured": false, 00:19:45.513 "data_offset": 0, 00:19:45.513 "data_size": 65536 00:19:45.513 }, 00:19:45.513 { 00:19:45.513 "name": "BaseBdev4", 00:19:45.513 "uuid": "99648ec9-036e-442e-83ae-9a5d48396e64", 00:19:45.513 "is_configured": true, 00:19:45.513 "data_offset": 0, 00:19:45.513 "data_size": 65536 00:19:45.513 } 00:19:45.513 ] 00:19:45.513 }' 00:19:45.513 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:45.513 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.080 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:46.080 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.339 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:46.339 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:46.598 [2024-07-25 13:19:56.857489] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:46.598 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:46.598 13:19:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:46.598 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:46.598 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:46.598 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:46.598 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:46.598 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.598 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.598 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.598 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.598 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.598 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.857 13:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.857 "name": "Existed_Raid", 00:19:46.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.857 "strip_size_kb": 64, 00:19:46.857 "state": "configuring", 00:19:46.857 "raid_level": "concat", 00:19:46.857 "superblock": false, 00:19:46.857 "num_base_bdevs": 4, 00:19:46.857 "num_base_bdevs_discovered": 3, 00:19:46.857 "num_base_bdevs_operational": 4, 00:19:46.857 "base_bdevs_list": [ 00:19:46.857 { 00:19:46.857 "name": "BaseBdev1", 00:19:46.857 "uuid": "8cb6103e-8992-4cf6-8ef9-d79ab63240c2", 00:19:46.857 "is_configured": true, 00:19:46.857 "data_offset": 0, 00:19:46.857 "data_size": 65536 
00:19:46.857 }, 00:19:46.857 { 00:19:46.857 "name": null, 00:19:46.857 "uuid": "7ef8dcf5-1f3e-4d6b-9924-106688b03911", 00:19:46.857 "is_configured": false, 00:19:46.857 "data_offset": 0, 00:19:46.857 "data_size": 65536 00:19:46.857 }, 00:19:46.857 { 00:19:46.857 "name": "BaseBdev3", 00:19:46.857 "uuid": "7ea97d2a-59cd-40bd-85f9-360142343b85", 00:19:46.857 "is_configured": true, 00:19:46.857 "data_offset": 0, 00:19:46.857 "data_size": 65536 00:19:46.857 }, 00:19:46.857 { 00:19:46.857 "name": "BaseBdev4", 00:19:46.857 "uuid": "99648ec9-036e-442e-83ae-9a5d48396e64", 00:19:46.857 "is_configured": true, 00:19:46.857 "data_offset": 0, 00:19:46.857 "data_size": 65536 00:19:46.857 } 00:19:46.857 ] 00:19:46.857 }' 00:19:46.857 13:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.857 13:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.424 13:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.424 13:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:47.424 13:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:47.424 13:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:47.683 [2024-07-25 13:19:58.092751] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:47.683 13:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:47.683 13:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:47.683 13:19:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:47.683 13:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:47.683 13:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:47.683 13:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:47.683 13:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:47.683 13:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:47.683 13:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:47.683 13:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:47.683 13:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.683 13:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.942 13:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:47.942 "name": "Existed_Raid", 00:19:47.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.942 "strip_size_kb": 64, 00:19:47.942 "state": "configuring", 00:19:47.942 "raid_level": "concat", 00:19:47.942 "superblock": false, 00:19:47.942 "num_base_bdevs": 4, 00:19:47.942 "num_base_bdevs_discovered": 2, 00:19:47.942 "num_base_bdevs_operational": 4, 00:19:47.942 "base_bdevs_list": [ 00:19:47.942 { 00:19:47.942 "name": null, 00:19:47.942 "uuid": "8cb6103e-8992-4cf6-8ef9-d79ab63240c2", 00:19:47.942 "is_configured": false, 00:19:47.942 "data_offset": 0, 00:19:47.942 "data_size": 65536 00:19:47.942 }, 00:19:47.942 { 00:19:47.942 "name": null, 00:19:47.942 "uuid": "7ef8dcf5-1f3e-4d6b-9924-106688b03911", 
00:19:47.942 "is_configured": false, 00:19:47.942 "data_offset": 0, 00:19:47.942 "data_size": 65536 00:19:47.942 }, 00:19:47.942 { 00:19:47.942 "name": "BaseBdev3", 00:19:47.942 "uuid": "7ea97d2a-59cd-40bd-85f9-360142343b85", 00:19:47.942 "is_configured": true, 00:19:47.942 "data_offset": 0, 00:19:47.942 "data_size": 65536 00:19:47.942 }, 00:19:47.942 { 00:19:47.942 "name": "BaseBdev4", 00:19:47.942 "uuid": "99648ec9-036e-442e-83ae-9a5d48396e64", 00:19:47.942 "is_configured": true, 00:19:47.942 "data_offset": 0, 00:19:47.942 "data_size": 65536 00:19:47.942 } 00:19:47.942 ] 00:19:47.942 }' 00:19:47.942 13:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:47.942 13:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.509 13:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.509 13:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:48.767 13:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:48.767 13:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:49.026 [2024-07-25 13:19:59.358017] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:49.026 13:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:49.026 13:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:49.026 13:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:49.027 
13:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:49.027 13:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:49.027 13:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:49.027 13:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:49.027 13:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:49.027 13:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:49.027 13:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:49.027 13:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.027 13:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.285 13:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:49.285 "name": "Existed_Raid", 00:19:49.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.285 "strip_size_kb": 64, 00:19:49.285 "state": "configuring", 00:19:49.285 "raid_level": "concat", 00:19:49.285 "superblock": false, 00:19:49.285 "num_base_bdevs": 4, 00:19:49.285 "num_base_bdevs_discovered": 3, 00:19:49.285 "num_base_bdevs_operational": 4, 00:19:49.285 "base_bdevs_list": [ 00:19:49.285 { 00:19:49.285 "name": null, 00:19:49.285 "uuid": "8cb6103e-8992-4cf6-8ef9-d79ab63240c2", 00:19:49.285 "is_configured": false, 00:19:49.285 "data_offset": 0, 00:19:49.285 "data_size": 65536 00:19:49.285 }, 00:19:49.285 { 00:19:49.285 "name": "BaseBdev2", 00:19:49.285 "uuid": "7ef8dcf5-1f3e-4d6b-9924-106688b03911", 00:19:49.285 "is_configured": true, 00:19:49.285 "data_offset": 0, 
00:19:49.285 "data_size": 65536 00:19:49.285 }, 00:19:49.285 { 00:19:49.285 "name": "BaseBdev3", 00:19:49.285 "uuid": "7ea97d2a-59cd-40bd-85f9-360142343b85", 00:19:49.285 "is_configured": true, 00:19:49.285 "data_offset": 0, 00:19:49.285 "data_size": 65536 00:19:49.285 }, 00:19:49.285 { 00:19:49.285 "name": "BaseBdev4", 00:19:49.285 "uuid": "99648ec9-036e-442e-83ae-9a5d48396e64", 00:19:49.285 "is_configured": true, 00:19:49.285 "data_offset": 0, 00:19:49.285 "data_size": 65536 00:19:49.285 } 00:19:49.285 ] 00:19:49.285 }' 00:19:49.285 13:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:49.285 13:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.853 13:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:49.853 13:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.112 13:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:50.112 13:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.112 13:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:50.371 13:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8cb6103e-8992-4cf6-8ef9-d79ab63240c2 00:19:50.630 [2024-07-25 13:20:00.861209] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:50.630 [2024-07-25 13:20:00.861242] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0x25d0360 00:19:50.630 [2024-07-25 13:20:00.861250] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:50.630 [2024-07-25 13:20:00.861434] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2783c10 00:19:50.630 [2024-07-25 13:20:00.861548] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25d0360 00:19:50.630 [2024-07-25 13:20:00.861558] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x25d0360 00:19:50.630 [2024-07-25 13:20:00.861705] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.630 NewBaseBdev 00:19:50.630 13:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:50.630 13:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:19:50.630 13:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:50.630 13:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:50.630 13:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:50.630 13:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:50.630 13:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:50.630 13:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:50.889 [ 00:19:50.889 { 00:19:50.889 "name": "NewBaseBdev", 00:19:50.889 "aliases": [ 00:19:50.889 "8cb6103e-8992-4cf6-8ef9-d79ab63240c2" 00:19:50.889 ], 00:19:50.889 "product_name": "Malloc disk", 00:19:50.889 
"block_size": 512, 00:19:50.889 "num_blocks": 65536, 00:19:50.889 "uuid": "8cb6103e-8992-4cf6-8ef9-d79ab63240c2", 00:19:50.889 "assigned_rate_limits": { 00:19:50.889 "rw_ios_per_sec": 0, 00:19:50.889 "rw_mbytes_per_sec": 0, 00:19:50.889 "r_mbytes_per_sec": 0, 00:19:50.889 "w_mbytes_per_sec": 0 00:19:50.889 }, 00:19:50.889 "claimed": true, 00:19:50.889 "claim_type": "exclusive_write", 00:19:50.889 "zoned": false, 00:19:50.889 "supported_io_types": { 00:19:50.889 "read": true, 00:19:50.889 "write": true, 00:19:50.889 "unmap": true, 00:19:50.889 "flush": true, 00:19:50.889 "reset": true, 00:19:50.889 "nvme_admin": false, 00:19:50.889 "nvme_io": false, 00:19:50.889 "nvme_io_md": false, 00:19:50.889 "write_zeroes": true, 00:19:50.889 "zcopy": true, 00:19:50.889 "get_zone_info": false, 00:19:50.889 "zone_management": false, 00:19:50.889 "zone_append": false, 00:19:50.889 "compare": false, 00:19:50.889 "compare_and_write": false, 00:19:50.889 "abort": true, 00:19:50.889 "seek_hole": false, 00:19:50.889 "seek_data": false, 00:19:50.889 "copy": true, 00:19:50.889 "nvme_iov_md": false 00:19:50.889 }, 00:19:50.889 "memory_domains": [ 00:19:50.889 { 00:19:50.889 "dma_device_id": "system", 00:19:50.889 "dma_device_type": 1 00:19:50.889 }, 00:19:50.889 { 00:19:50.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.889 "dma_device_type": 2 00:19:50.889 } 00:19:50.889 ], 00:19:50.889 "driver_specific": {} 00:19:50.889 } 00:19:50.889 ] 00:19:50.889 13:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:50.889 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:50.889 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:50.889 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:50.889 13:20:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:50.889 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:50.889 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:50.889 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:50.889 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:50.889 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:50.889 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:50.889 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.889 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.148 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.148 "name": "Existed_Raid", 00:19:51.148 "uuid": "02ebe23c-9962-472c-95ca-94d2414b3402", 00:19:51.148 "strip_size_kb": 64, 00:19:51.148 "state": "online", 00:19:51.148 "raid_level": "concat", 00:19:51.148 "superblock": false, 00:19:51.148 "num_base_bdevs": 4, 00:19:51.148 "num_base_bdevs_discovered": 4, 00:19:51.148 "num_base_bdevs_operational": 4, 00:19:51.148 "base_bdevs_list": [ 00:19:51.148 { 00:19:51.148 "name": "NewBaseBdev", 00:19:51.148 "uuid": "8cb6103e-8992-4cf6-8ef9-d79ab63240c2", 00:19:51.148 "is_configured": true, 00:19:51.148 "data_offset": 0, 00:19:51.148 "data_size": 65536 00:19:51.148 }, 00:19:51.148 { 00:19:51.148 "name": "BaseBdev2", 00:19:51.148 "uuid": "7ef8dcf5-1f3e-4d6b-9924-106688b03911", 00:19:51.148 "is_configured": true, 00:19:51.148 "data_offset": 0, 00:19:51.148 "data_size": 65536 00:19:51.148 }, 
00:19:51.148 { 00:19:51.148 "name": "BaseBdev3", 00:19:51.148 "uuid": "7ea97d2a-59cd-40bd-85f9-360142343b85", 00:19:51.148 "is_configured": true, 00:19:51.148 "data_offset": 0, 00:19:51.148 "data_size": 65536 00:19:51.148 }, 00:19:51.148 { 00:19:51.148 "name": "BaseBdev4", 00:19:51.148 "uuid": "99648ec9-036e-442e-83ae-9a5d48396e64", 00:19:51.148 "is_configured": true, 00:19:51.148 "data_offset": 0, 00:19:51.148 "data_size": 65536 00:19:51.148 } 00:19:51.148 ] 00:19:51.148 }' 00:19:51.148 13:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.148 13:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.715 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:51.715 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:51.715 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:51.715 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:51.715 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:51.715 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:51.715 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:51.715 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:51.973 [2024-07-25 13:20:02.341608] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:51.973 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:51.973 "name": "Existed_Raid", 00:19:51.973 "aliases": [ 00:19:51.973 "02ebe23c-9962-472c-95ca-94d2414b3402" 
00:19:51.973 ], 00:19:51.973 "product_name": "Raid Volume", 00:19:51.973 "block_size": 512, 00:19:51.973 "num_blocks": 262144, 00:19:51.973 "uuid": "02ebe23c-9962-472c-95ca-94d2414b3402", 00:19:51.973 "assigned_rate_limits": { 00:19:51.973 "rw_ios_per_sec": 0, 00:19:51.973 "rw_mbytes_per_sec": 0, 00:19:51.973 "r_mbytes_per_sec": 0, 00:19:51.973 "w_mbytes_per_sec": 0 00:19:51.973 }, 00:19:51.973 "claimed": false, 00:19:51.973 "zoned": false, 00:19:51.973 "supported_io_types": { 00:19:51.973 "read": true, 00:19:51.973 "write": true, 00:19:51.973 "unmap": true, 00:19:51.973 "flush": true, 00:19:51.973 "reset": true, 00:19:51.973 "nvme_admin": false, 00:19:51.973 "nvme_io": false, 00:19:51.973 "nvme_io_md": false, 00:19:51.973 "write_zeroes": true, 00:19:51.973 "zcopy": false, 00:19:51.973 "get_zone_info": false, 00:19:51.973 "zone_management": false, 00:19:51.973 "zone_append": false, 00:19:51.973 "compare": false, 00:19:51.973 "compare_and_write": false, 00:19:51.973 "abort": false, 00:19:51.973 "seek_hole": false, 00:19:51.973 "seek_data": false, 00:19:51.973 "copy": false, 00:19:51.973 "nvme_iov_md": false 00:19:51.973 }, 00:19:51.973 "memory_domains": [ 00:19:51.973 { 00:19:51.973 "dma_device_id": "system", 00:19:51.973 "dma_device_type": 1 00:19:51.973 }, 00:19:51.973 { 00:19:51.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.973 "dma_device_type": 2 00:19:51.974 }, 00:19:51.974 { 00:19:51.974 "dma_device_id": "system", 00:19:51.974 "dma_device_type": 1 00:19:51.974 }, 00:19:51.974 { 00:19:51.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.974 "dma_device_type": 2 00:19:51.974 }, 00:19:51.974 { 00:19:51.974 "dma_device_id": "system", 00:19:51.974 "dma_device_type": 1 00:19:51.974 }, 00:19:51.974 { 00:19:51.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.974 "dma_device_type": 2 00:19:51.974 }, 00:19:51.974 { 00:19:51.974 "dma_device_id": "system", 00:19:51.974 "dma_device_type": 1 00:19:51.974 }, 00:19:51.974 { 00:19:51.974 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:51.974 "dma_device_type": 2 00:19:51.974 } 00:19:51.974 ], 00:19:51.974 "driver_specific": { 00:19:51.974 "raid": { 00:19:51.974 "uuid": "02ebe23c-9962-472c-95ca-94d2414b3402", 00:19:51.974 "strip_size_kb": 64, 00:19:51.974 "state": "online", 00:19:51.974 "raid_level": "concat", 00:19:51.974 "superblock": false, 00:19:51.974 "num_base_bdevs": 4, 00:19:51.974 "num_base_bdevs_discovered": 4, 00:19:51.974 "num_base_bdevs_operational": 4, 00:19:51.974 "base_bdevs_list": [ 00:19:51.974 { 00:19:51.974 "name": "NewBaseBdev", 00:19:51.974 "uuid": "8cb6103e-8992-4cf6-8ef9-d79ab63240c2", 00:19:51.974 "is_configured": true, 00:19:51.974 "data_offset": 0, 00:19:51.974 "data_size": 65536 00:19:51.974 }, 00:19:51.974 { 00:19:51.974 "name": "BaseBdev2", 00:19:51.974 "uuid": "7ef8dcf5-1f3e-4d6b-9924-106688b03911", 00:19:51.974 "is_configured": true, 00:19:51.974 "data_offset": 0, 00:19:51.974 "data_size": 65536 00:19:51.974 }, 00:19:51.974 { 00:19:51.974 "name": "BaseBdev3", 00:19:51.974 "uuid": "7ea97d2a-59cd-40bd-85f9-360142343b85", 00:19:51.974 "is_configured": true, 00:19:51.974 "data_offset": 0, 00:19:51.974 "data_size": 65536 00:19:51.974 }, 00:19:51.974 { 00:19:51.974 "name": "BaseBdev4", 00:19:51.974 "uuid": "99648ec9-036e-442e-83ae-9a5d48396e64", 00:19:51.974 "is_configured": true, 00:19:51.974 "data_offset": 0, 00:19:51.974 "data_size": 65536 00:19:51.974 } 00:19:51.974 ] 00:19:51.974 } 00:19:51.974 } 00:19:51.974 }' 00:19:51.974 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:51.974 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:51.974 BaseBdev2 00:19:51.974 BaseBdev3 00:19:51.974 BaseBdev4' 00:19:51.974 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:51.974 13:20:02 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:51.974 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:52.232 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:52.232 "name": "NewBaseBdev", 00:19:52.232 "aliases": [ 00:19:52.232 "8cb6103e-8992-4cf6-8ef9-d79ab63240c2" 00:19:52.232 ], 00:19:52.232 "product_name": "Malloc disk", 00:19:52.232 "block_size": 512, 00:19:52.233 "num_blocks": 65536, 00:19:52.233 "uuid": "8cb6103e-8992-4cf6-8ef9-d79ab63240c2", 00:19:52.233 "assigned_rate_limits": { 00:19:52.233 "rw_ios_per_sec": 0, 00:19:52.233 "rw_mbytes_per_sec": 0, 00:19:52.233 "r_mbytes_per_sec": 0, 00:19:52.233 "w_mbytes_per_sec": 0 00:19:52.233 }, 00:19:52.233 "claimed": true, 00:19:52.233 "claim_type": "exclusive_write", 00:19:52.233 "zoned": false, 00:19:52.233 "supported_io_types": { 00:19:52.233 "read": true, 00:19:52.233 "write": true, 00:19:52.233 "unmap": true, 00:19:52.233 "flush": true, 00:19:52.233 "reset": true, 00:19:52.233 "nvme_admin": false, 00:19:52.233 "nvme_io": false, 00:19:52.233 "nvme_io_md": false, 00:19:52.233 "write_zeroes": true, 00:19:52.233 "zcopy": true, 00:19:52.233 "get_zone_info": false, 00:19:52.233 "zone_management": false, 00:19:52.233 "zone_append": false, 00:19:52.233 "compare": false, 00:19:52.233 "compare_and_write": false, 00:19:52.233 "abort": true, 00:19:52.233 "seek_hole": false, 00:19:52.233 "seek_data": false, 00:19:52.233 "copy": true, 00:19:52.233 "nvme_iov_md": false 00:19:52.233 }, 00:19:52.233 "memory_domains": [ 00:19:52.233 { 00:19:52.233 "dma_device_id": "system", 00:19:52.233 "dma_device_type": 1 00:19:52.233 }, 00:19:52.233 { 00:19:52.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.233 "dma_device_type": 2 00:19:52.233 } 00:19:52.233 ], 00:19:52.233 "driver_specific": {} 00:19:52.233 }' 00:19:52.233 13:20:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:52.233 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:52.233 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:52.233 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:52.492 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:52.492 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:52.492 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:52.492 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:52.492 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:52.492 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:52.492 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:52.492 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:52.492 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:52.492 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:52.492 13:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:52.751 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:52.751 "name": "BaseBdev2", 00:19:52.751 "aliases": [ 00:19:52.751 "7ef8dcf5-1f3e-4d6b-9924-106688b03911" 00:19:52.751 ], 00:19:52.751 "product_name": "Malloc disk", 00:19:52.751 "block_size": 512, 00:19:52.751 "num_blocks": 65536, 00:19:52.751 "uuid": 
"7ef8dcf5-1f3e-4d6b-9924-106688b03911", 00:19:52.751 "assigned_rate_limits": { 00:19:52.751 "rw_ios_per_sec": 0, 00:19:52.751 "rw_mbytes_per_sec": 0, 00:19:52.751 "r_mbytes_per_sec": 0, 00:19:52.751 "w_mbytes_per_sec": 0 00:19:52.751 }, 00:19:52.751 "claimed": true, 00:19:52.751 "claim_type": "exclusive_write", 00:19:52.751 "zoned": false, 00:19:52.751 "supported_io_types": { 00:19:52.751 "read": true, 00:19:52.751 "write": true, 00:19:52.751 "unmap": true, 00:19:52.751 "flush": true, 00:19:52.751 "reset": true, 00:19:52.751 "nvme_admin": false, 00:19:52.751 "nvme_io": false, 00:19:52.751 "nvme_io_md": false, 00:19:52.751 "write_zeroes": true, 00:19:52.751 "zcopy": true, 00:19:52.751 "get_zone_info": false, 00:19:52.751 "zone_management": false, 00:19:52.751 "zone_append": false, 00:19:52.751 "compare": false, 00:19:52.751 "compare_and_write": false, 00:19:52.751 "abort": true, 00:19:52.751 "seek_hole": false, 00:19:52.751 "seek_data": false, 00:19:52.751 "copy": true, 00:19:52.751 "nvme_iov_md": false 00:19:52.751 }, 00:19:52.751 "memory_domains": [ 00:19:52.751 { 00:19:52.751 "dma_device_id": "system", 00:19:52.751 "dma_device_type": 1 00:19:52.751 }, 00:19:52.751 { 00:19:52.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.751 "dma_device_type": 2 00:19:52.751 } 00:19:52.751 ], 00:19:52.751 "driver_specific": {} 00:19:52.751 }' 00:19:52.751 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:52.751 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:53.008 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:53.008 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:53.008 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:53.008 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:53.008 13:20:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:53.008 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:53.008 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:53.008 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:53.009 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:53.267 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:53.267 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:53.267 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:53.267 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:53.526 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:53.526 "name": "BaseBdev3", 00:19:53.526 "aliases": [ 00:19:53.526 "7ea97d2a-59cd-40bd-85f9-360142343b85" 00:19:53.526 ], 00:19:53.526 "product_name": "Malloc disk", 00:19:53.526 "block_size": 512, 00:19:53.526 "num_blocks": 65536, 00:19:53.527 "uuid": "7ea97d2a-59cd-40bd-85f9-360142343b85", 00:19:53.527 "assigned_rate_limits": { 00:19:53.527 "rw_ios_per_sec": 0, 00:19:53.527 "rw_mbytes_per_sec": 0, 00:19:53.527 "r_mbytes_per_sec": 0, 00:19:53.527 "w_mbytes_per_sec": 0 00:19:53.527 }, 00:19:53.527 "claimed": true, 00:19:53.527 "claim_type": "exclusive_write", 00:19:53.527 "zoned": false, 00:19:53.527 "supported_io_types": { 00:19:53.527 "read": true, 00:19:53.527 "write": true, 00:19:53.527 "unmap": true, 00:19:53.527 "flush": true, 00:19:53.527 "reset": true, 00:19:53.527 "nvme_admin": false, 00:19:53.527 "nvme_io": false, 00:19:53.527 "nvme_io_md": false, 
00:19:53.527 "write_zeroes": true, 00:19:53.527 "zcopy": true, 00:19:53.527 "get_zone_info": false, 00:19:53.527 "zone_management": false, 00:19:53.527 "zone_append": false, 00:19:53.527 "compare": false, 00:19:53.527 "compare_and_write": false, 00:19:53.527 "abort": true, 00:19:53.527 "seek_hole": false, 00:19:53.527 "seek_data": false, 00:19:53.527 "copy": true, 00:19:53.527 "nvme_iov_md": false 00:19:53.527 }, 00:19:53.527 "memory_domains": [ 00:19:53.527 { 00:19:53.527 "dma_device_id": "system", 00:19:53.527 "dma_device_type": 1 00:19:53.527 }, 00:19:53.527 { 00:19:53.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.527 "dma_device_type": 2 00:19:53.527 } 00:19:53.527 ], 00:19:53.527 "driver_specific": {} 00:19:53.527 }' 00:19:53.527 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:53.527 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:53.527 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:53.527 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:53.527 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:53.527 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:53.527 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:53.527 13:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:53.786 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:53.786 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:53.786 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:53.786 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:53.786 13:20:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:53.786 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:53.786 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:54.045 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:54.045 "name": "BaseBdev4", 00:19:54.045 "aliases": [ 00:19:54.045 "99648ec9-036e-442e-83ae-9a5d48396e64" 00:19:54.045 ], 00:19:54.045 "product_name": "Malloc disk", 00:19:54.045 "block_size": 512, 00:19:54.045 "num_blocks": 65536, 00:19:54.045 "uuid": "99648ec9-036e-442e-83ae-9a5d48396e64", 00:19:54.045 "assigned_rate_limits": { 00:19:54.045 "rw_ios_per_sec": 0, 00:19:54.045 "rw_mbytes_per_sec": 0, 00:19:54.045 "r_mbytes_per_sec": 0, 00:19:54.045 "w_mbytes_per_sec": 0 00:19:54.045 }, 00:19:54.045 "claimed": true, 00:19:54.045 "claim_type": "exclusive_write", 00:19:54.046 "zoned": false, 00:19:54.046 "supported_io_types": { 00:19:54.046 "read": true, 00:19:54.046 "write": true, 00:19:54.046 "unmap": true, 00:19:54.046 "flush": true, 00:19:54.046 "reset": true, 00:19:54.046 "nvme_admin": false, 00:19:54.046 "nvme_io": false, 00:19:54.046 "nvme_io_md": false, 00:19:54.046 "write_zeroes": true, 00:19:54.046 "zcopy": true, 00:19:54.046 "get_zone_info": false, 00:19:54.046 "zone_management": false, 00:19:54.046 "zone_append": false, 00:19:54.046 "compare": false, 00:19:54.046 "compare_and_write": false, 00:19:54.046 "abort": true, 00:19:54.046 "seek_hole": false, 00:19:54.046 "seek_data": false, 00:19:54.046 "copy": true, 00:19:54.046 "nvme_iov_md": false 00:19:54.046 }, 00:19:54.046 "memory_domains": [ 00:19:54.046 { 00:19:54.046 "dma_device_id": "system", 00:19:54.046 "dma_device_type": 1 00:19:54.046 }, 00:19:54.046 { 00:19:54.046 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:54.046 "dma_device_type": 2 00:19:54.046 } 00:19:54.046 ], 00:19:54.046 "driver_specific": {} 00:19:54.046 }' 00:19:54.046 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:54.046 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:54.046 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:54.046 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:54.046 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:54.046 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:54.046 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:54.305 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:54.305 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:54.305 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:54.305 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:54.305 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:54.305 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:54.565 [2024-07-25 13:20:04.888209] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:54.565 [2024-07-25 13:20:04.888240] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:54.565 [2024-07-25 13:20:04.888289] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:54.565 [2024-07-25 13:20:04.888342] 
bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:54.565 [2024-07-25 13:20:04.888353] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25d0360 name Existed_Raid, state offline 00:19:54.565 13:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 917472 00:19:54.565 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 917472 ']' 00:19:54.565 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 917472 00:19:54.565 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:19:54.565 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:54.565 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 917472 00:19:54.565 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:54.565 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:54.565 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 917472' 00:19:54.565 killing process with pid 917472 00:19:54.565 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 917472 00:19:54.565 [2024-07-25 13:20:04.960975] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:54.565 13:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 917472 00:19:54.565 [2024-07-25 13:20:04.991640] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:54.825 00:19:54.825 real 0m30.613s 00:19:54.825 user 0m56.215s 00:19:54.825 sys 0m5.502s 00:19:54.825 13:20:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.825 ************************************ 00:19:54.825 END TEST raid_state_function_test 00:19:54.825 ************************************ 00:19:54.825 13:20:05 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:19:54.825 13:20:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:54.825 13:20:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:54.825 13:20:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:54.825 ************************************ 00:19:54.825 START TEST raid_state_function_test_sb 00:19:54.825 ************************************ 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # 
strip_size_create_arg='-z 64' 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=923425 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 923425' 00:19:54.825 Process raid pid: 923425 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 923425 /var/tmp/spdk-raid.sock 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 923425 ']' 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:54.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:54.825 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.085 [2024-07-25 13:20:05.333148] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:19:55.085 [2024-07-25 13:20:05.333205] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:55.085 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:55.085 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:55.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:55.085 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:55.085 [2024-07-25 13:20:05.465659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.085 [2024-07-25 13:20:05.551423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.345 [2024-07-25 13:20:05.616240] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:55.345 [2024-07-25 13:20:05.616271] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:55.936 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:55.936 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:19:55.936 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:56.196 [2024-07-25 13:20:06.439648] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:56.196 [2024-07-25 13:20:06.439686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:19:56.196 [2024-07-25 13:20:06.439700] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:56.196 [2024-07-25 13:20:06.439711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:56.196 [2024-07-25 13:20:06.439719] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:56.196 [2024-07-25 13:20:06.439729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:56.196 [2024-07-25 13:20:06.439737] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:56.196 [2024-07-25 13:20:06.439747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:56.196 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:56.196 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:56.196 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:56.196 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:56.196 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:56.196 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:56.196 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:56.196 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:56.196 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:56.196 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:19:56.196 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.196 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.455 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:56.455 "name": "Existed_Raid", 00:19:56.455 "uuid": "7292dfe8-e8d3-40ab-9f87-e998d5014dbd", 00:19:56.455 "strip_size_kb": 64, 00:19:56.455 "state": "configuring", 00:19:56.455 "raid_level": "concat", 00:19:56.455 "superblock": true, 00:19:56.455 "num_base_bdevs": 4, 00:19:56.455 "num_base_bdevs_discovered": 0, 00:19:56.455 "num_base_bdevs_operational": 4, 00:19:56.455 "base_bdevs_list": [ 00:19:56.455 { 00:19:56.455 "name": "BaseBdev1", 00:19:56.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.455 "is_configured": false, 00:19:56.455 "data_offset": 0, 00:19:56.455 "data_size": 0 00:19:56.455 }, 00:19:56.455 { 00:19:56.455 "name": "BaseBdev2", 00:19:56.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.455 "is_configured": false, 00:19:56.455 "data_offset": 0, 00:19:56.455 "data_size": 0 00:19:56.455 }, 00:19:56.455 { 00:19:56.455 "name": "BaseBdev3", 00:19:56.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.455 "is_configured": false, 00:19:56.455 "data_offset": 0, 00:19:56.455 "data_size": 0 00:19:56.455 }, 00:19:56.455 { 00:19:56.455 "name": "BaseBdev4", 00:19:56.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.455 "is_configured": false, 00:19:56.455 "data_offset": 0, 00:19:56.455 "data_size": 0 00:19:56.455 } 00:19:56.455 ] 00:19:56.455 }' 00:19:56.455 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:56.455 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.024 
13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:57.024 [2024-07-25 13:20:07.434120] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:57.024 [2024-07-25 13:20:07.434156] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fb1f60 name Existed_Raid, state configuring 00:19:57.024 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:57.283 [2024-07-25 13:20:07.658744] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:57.283 [2024-07-25 13:20:07.658770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:57.283 [2024-07-25 13:20:07.658779] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:57.283 [2024-07-25 13:20:07.658790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:57.283 [2024-07-25 13:20:07.658798] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:57.283 [2024-07-25 13:20:07.658808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:57.283 [2024-07-25 13:20:07.658816] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:57.283 [2024-07-25 13:20:07.658826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:57.283 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 
-b BaseBdev1 00:19:57.542 [2024-07-25 13:20:07.896889] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:57.542 BaseBdev1 00:19:57.542 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:57.542 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:57.542 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:57.542 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:57.542 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:57.542 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:57.542 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:57.801 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:58.061 [ 00:19:58.061 { 00:19:58.061 "name": "BaseBdev1", 00:19:58.061 "aliases": [ 00:19:58.061 "aeced770-0222-4aa9-bb33-a6dd51118c55" 00:19:58.061 ], 00:19:58.061 "product_name": "Malloc disk", 00:19:58.061 "block_size": 512, 00:19:58.061 "num_blocks": 65536, 00:19:58.061 "uuid": "aeced770-0222-4aa9-bb33-a6dd51118c55", 00:19:58.061 "assigned_rate_limits": { 00:19:58.061 "rw_ios_per_sec": 0, 00:19:58.061 "rw_mbytes_per_sec": 0, 00:19:58.061 "r_mbytes_per_sec": 0, 00:19:58.061 "w_mbytes_per_sec": 0 00:19:58.061 }, 00:19:58.061 "claimed": true, 00:19:58.061 "claim_type": "exclusive_write", 00:19:58.061 "zoned": false, 00:19:58.061 "supported_io_types": { 00:19:58.061 "read": true, 00:19:58.061 "write": 
true, 00:19:58.061 "unmap": true, 00:19:58.061 "flush": true, 00:19:58.061 "reset": true, 00:19:58.061 "nvme_admin": false, 00:19:58.061 "nvme_io": false, 00:19:58.061 "nvme_io_md": false, 00:19:58.061 "write_zeroes": true, 00:19:58.061 "zcopy": true, 00:19:58.061 "get_zone_info": false, 00:19:58.061 "zone_management": false, 00:19:58.061 "zone_append": false, 00:19:58.061 "compare": false, 00:19:58.061 "compare_and_write": false, 00:19:58.061 "abort": true, 00:19:58.061 "seek_hole": false, 00:19:58.061 "seek_data": false, 00:19:58.061 "copy": true, 00:19:58.061 "nvme_iov_md": false 00:19:58.061 }, 00:19:58.061 "memory_domains": [ 00:19:58.061 { 00:19:58.061 "dma_device_id": "system", 00:19:58.061 "dma_device_type": 1 00:19:58.061 }, 00:19:58.061 { 00:19:58.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.061 "dma_device_type": 2 00:19:58.061 } 00:19:58.061 ], 00:19:58.061 "driver_specific": {} 00:19:58.061 } 00:19:58.061 ] 00:19:58.061 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:58.061 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:58.061 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:58.061 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:58.061 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:58.061 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:58.061 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:58.061 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:58.061 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:19:58.061 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:58.061 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:58.061 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.061 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.321 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:58.321 "name": "Existed_Raid", 00:19:58.321 "uuid": "d9561a8f-9b7a-4cf8-b2b4-5ee849fd760b", 00:19:58.321 "strip_size_kb": 64, 00:19:58.321 "state": "configuring", 00:19:58.321 "raid_level": "concat", 00:19:58.321 "superblock": true, 00:19:58.321 "num_base_bdevs": 4, 00:19:58.321 "num_base_bdevs_discovered": 1, 00:19:58.321 "num_base_bdevs_operational": 4, 00:19:58.321 "base_bdevs_list": [ 00:19:58.321 { 00:19:58.321 "name": "BaseBdev1", 00:19:58.321 "uuid": "aeced770-0222-4aa9-bb33-a6dd51118c55", 00:19:58.321 "is_configured": true, 00:19:58.321 "data_offset": 2048, 00:19:58.321 "data_size": 63488 00:19:58.321 }, 00:19:58.321 { 00:19:58.321 "name": "BaseBdev2", 00:19:58.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.321 "is_configured": false, 00:19:58.321 "data_offset": 0, 00:19:58.321 "data_size": 0 00:19:58.321 }, 00:19:58.321 { 00:19:58.321 "name": "BaseBdev3", 00:19:58.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.321 "is_configured": false, 00:19:58.321 "data_offset": 0, 00:19:58.321 "data_size": 0 00:19:58.321 }, 00:19:58.321 { 00:19:58.321 "name": "BaseBdev4", 00:19:58.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.321 "is_configured": false, 00:19:58.321 "data_offset": 0, 00:19:58.321 "data_size": 0 00:19:58.321 } 00:19:58.321 ] 
00:19:58.321 }' 00:19:58.321 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:58.321 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.888 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:58.888 [2024-07-25 13:20:09.360756] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:58.888 [2024-07-25 13:20:09.360791] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fb17d0 name Existed_Raid, state configuring 00:19:59.148 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:59.148 [2024-07-25 13:20:09.589408] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:59.148 [2024-07-25 13:20:09.590789] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:59.148 [2024-07-25 13:20:09.590821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:59.148 [2024-07-25 13:20:09.590830] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:59.148 [2024-07-25 13:20:09.590841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:59.148 [2024-07-25 13:20:09.590849] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:59.148 [2024-07-25 13:20:09.590859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:59.148 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 
00:19:59.148 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:59.148 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:59.148 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:59.148 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:59.148 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:59.148 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:59.148 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:59.148 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:59.148 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:59.148 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:59.148 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:59.148 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.148 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.407 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:59.407 "name": "Existed_Raid", 00:19:59.407 "uuid": "319caaef-bd04-4bdd-bc08-3b090756f824", 00:19:59.407 "strip_size_kb": 64, 00:19:59.407 "state": "configuring", 00:19:59.407 "raid_level": "concat", 00:19:59.407 "superblock": true, 
00:19:59.407 "num_base_bdevs": 4, 00:19:59.407 "num_base_bdevs_discovered": 1, 00:19:59.407 "num_base_bdevs_operational": 4, 00:19:59.407 "base_bdevs_list": [ 00:19:59.407 { 00:19:59.407 "name": "BaseBdev1", 00:19:59.407 "uuid": "aeced770-0222-4aa9-bb33-a6dd51118c55", 00:19:59.407 "is_configured": true, 00:19:59.407 "data_offset": 2048, 00:19:59.407 "data_size": 63488 00:19:59.407 }, 00:19:59.407 { 00:19:59.407 "name": "BaseBdev2", 00:19:59.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.407 "is_configured": false, 00:19:59.407 "data_offset": 0, 00:19:59.407 "data_size": 0 00:19:59.407 }, 00:19:59.407 { 00:19:59.407 "name": "BaseBdev3", 00:19:59.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.407 "is_configured": false, 00:19:59.407 "data_offset": 0, 00:19:59.407 "data_size": 0 00:19:59.407 }, 00:19:59.407 { 00:19:59.407 "name": "BaseBdev4", 00:19:59.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.407 "is_configured": false, 00:19:59.407 "data_offset": 0, 00:19:59.407 "data_size": 0 00:19:59.407 } 00:19:59.407 ] 00:19:59.407 }' 00:19:59.407 13:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:59.407 13:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.974 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:00.233 [2024-07-25 13:20:10.623187] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:00.233 BaseBdev2 00:20:00.233 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:00.233 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:00.233 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:20:00.233 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:00.233 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:00.233 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:00.233 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:00.493 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:00.753 [ 00:20:00.753 { 00:20:00.753 "name": "BaseBdev2", 00:20:00.753 "aliases": [ 00:20:00.753 "aabc87db-c64a-4c4c-b221-a957ddec768b" 00:20:00.753 ], 00:20:00.753 "product_name": "Malloc disk", 00:20:00.753 "block_size": 512, 00:20:00.753 "num_blocks": 65536, 00:20:00.753 "uuid": "aabc87db-c64a-4c4c-b221-a957ddec768b", 00:20:00.753 "assigned_rate_limits": { 00:20:00.753 "rw_ios_per_sec": 0, 00:20:00.753 "rw_mbytes_per_sec": 0, 00:20:00.753 "r_mbytes_per_sec": 0, 00:20:00.753 "w_mbytes_per_sec": 0 00:20:00.753 }, 00:20:00.753 "claimed": true, 00:20:00.753 "claim_type": "exclusive_write", 00:20:00.753 "zoned": false, 00:20:00.753 "supported_io_types": { 00:20:00.753 "read": true, 00:20:00.753 "write": true, 00:20:00.753 "unmap": true, 00:20:00.753 "flush": true, 00:20:00.753 "reset": true, 00:20:00.753 "nvme_admin": false, 00:20:00.753 "nvme_io": false, 00:20:00.753 "nvme_io_md": false, 00:20:00.753 "write_zeroes": true, 00:20:00.753 "zcopy": true, 00:20:00.753 "get_zone_info": false, 00:20:00.753 "zone_management": false, 00:20:00.753 "zone_append": false, 00:20:00.753 "compare": false, 00:20:00.753 "compare_and_write": false, 00:20:00.753 "abort": true, 00:20:00.753 "seek_hole": false, 
00:20:00.753 "seek_data": false, 00:20:00.753 "copy": true, 00:20:00.753 "nvme_iov_md": false 00:20:00.753 }, 00:20:00.753 "memory_domains": [ 00:20:00.753 { 00:20:00.753 "dma_device_id": "system", 00:20:00.753 "dma_device_type": 1 00:20:00.753 }, 00:20:00.753 { 00:20:00.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.753 "dma_device_type": 2 00:20:00.753 } 00:20:00.753 ], 00:20:00.753 "driver_specific": {} 00:20:00.753 } 00:20:00.753 ] 00:20:00.753 13:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:00.753 13:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:00.753 13:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:00.753 13:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:00.753 13:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:00.753 13:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:00.753 13:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:00.753 13:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:00.753 13:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:00.753 13:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:00.753 13:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:00.753 13:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:00.753 13:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:00.753 13:20:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.753 13:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.012 13:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:01.012 "name": "Existed_Raid", 00:20:01.012 "uuid": "319caaef-bd04-4bdd-bc08-3b090756f824", 00:20:01.012 "strip_size_kb": 64, 00:20:01.012 "state": "configuring", 00:20:01.012 "raid_level": "concat", 00:20:01.012 "superblock": true, 00:20:01.012 "num_base_bdevs": 4, 00:20:01.012 "num_base_bdevs_discovered": 2, 00:20:01.012 "num_base_bdevs_operational": 4, 00:20:01.012 "base_bdevs_list": [ 00:20:01.012 { 00:20:01.012 "name": "BaseBdev1", 00:20:01.012 "uuid": "aeced770-0222-4aa9-bb33-a6dd51118c55", 00:20:01.012 "is_configured": true, 00:20:01.012 "data_offset": 2048, 00:20:01.012 "data_size": 63488 00:20:01.012 }, 00:20:01.012 { 00:20:01.012 "name": "BaseBdev2", 00:20:01.012 "uuid": "aabc87db-c64a-4c4c-b221-a957ddec768b", 00:20:01.012 "is_configured": true, 00:20:01.012 "data_offset": 2048, 00:20:01.012 "data_size": 63488 00:20:01.012 }, 00:20:01.013 { 00:20:01.013 "name": "BaseBdev3", 00:20:01.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.013 "is_configured": false, 00:20:01.013 "data_offset": 0, 00:20:01.013 "data_size": 0 00:20:01.013 }, 00:20:01.013 { 00:20:01.013 "name": "BaseBdev4", 00:20:01.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.013 "is_configured": false, 00:20:01.013 "data_offset": 0, 00:20:01.013 "data_size": 0 00:20:01.013 } 00:20:01.013 ] 00:20:01.013 }' 00:20:01.013 13:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:01.013 13:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.580 13:20:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:01.840 [2024-07-25 13:20:12.110275] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:01.840 BaseBdev3 00:20:01.840 13:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:01.840 13:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:01.840 13:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:01.840 13:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:01.840 13:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:01.840 13:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:01.840 13:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:02.099 13:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:02.099 [ 00:20:02.099 { 00:20:02.099 "name": "BaseBdev3", 00:20:02.099 "aliases": [ 00:20:02.099 "98096b2a-072a-4c52-bb56-42976c178acb" 00:20:02.099 ], 00:20:02.099 "product_name": "Malloc disk", 00:20:02.099 "block_size": 512, 00:20:02.099 "num_blocks": 65536, 00:20:02.099 "uuid": "98096b2a-072a-4c52-bb56-42976c178acb", 00:20:02.099 "assigned_rate_limits": { 00:20:02.099 "rw_ios_per_sec": 0, 00:20:02.099 "rw_mbytes_per_sec": 0, 00:20:02.099 "r_mbytes_per_sec": 0, 00:20:02.099 "w_mbytes_per_sec": 0 00:20:02.099 }, 
00:20:02.099 "claimed": true, 00:20:02.099 "claim_type": "exclusive_write", 00:20:02.099 "zoned": false, 00:20:02.099 "supported_io_types": { 00:20:02.099 "read": true, 00:20:02.099 "write": true, 00:20:02.099 "unmap": true, 00:20:02.099 "flush": true, 00:20:02.099 "reset": true, 00:20:02.099 "nvme_admin": false, 00:20:02.099 "nvme_io": false, 00:20:02.099 "nvme_io_md": false, 00:20:02.099 "write_zeroes": true, 00:20:02.099 "zcopy": true, 00:20:02.099 "get_zone_info": false, 00:20:02.099 "zone_management": false, 00:20:02.099 "zone_append": false, 00:20:02.099 "compare": false, 00:20:02.099 "compare_and_write": false, 00:20:02.099 "abort": true, 00:20:02.099 "seek_hole": false, 00:20:02.099 "seek_data": false, 00:20:02.099 "copy": true, 00:20:02.099 "nvme_iov_md": false 00:20:02.099 }, 00:20:02.099 "memory_domains": [ 00:20:02.099 { 00:20:02.099 "dma_device_id": "system", 00:20:02.099 "dma_device_type": 1 00:20:02.099 }, 00:20:02.099 { 00:20:02.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.099 "dma_device_type": 2 00:20:02.099 } 00:20:02.099 ], 00:20:02.099 "driver_specific": {} 00:20:02.099 } 00:20:02.099 ] 00:20:02.099 13:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:02.099 13:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:02.099 13:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:02.099 13:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:02.099 13:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:02.099 13:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:02.099 13:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:02.099 13:20:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:02.099 13:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:02.099 13:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:02.099 13:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:02.099 13:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:02.099 13:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:02.099 13:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.099 13:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.359 13:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.359 "name": "Existed_Raid", 00:20:02.359 "uuid": "319caaef-bd04-4bdd-bc08-3b090756f824", 00:20:02.359 "strip_size_kb": 64, 00:20:02.359 "state": "configuring", 00:20:02.359 "raid_level": "concat", 00:20:02.359 "superblock": true, 00:20:02.359 "num_base_bdevs": 4, 00:20:02.359 "num_base_bdevs_discovered": 3, 00:20:02.359 "num_base_bdevs_operational": 4, 00:20:02.359 "base_bdevs_list": [ 00:20:02.359 { 00:20:02.359 "name": "BaseBdev1", 00:20:02.359 "uuid": "aeced770-0222-4aa9-bb33-a6dd51118c55", 00:20:02.359 "is_configured": true, 00:20:02.359 "data_offset": 2048, 00:20:02.359 "data_size": 63488 00:20:02.359 }, 00:20:02.359 { 00:20:02.359 "name": "BaseBdev2", 00:20:02.359 "uuid": "aabc87db-c64a-4c4c-b221-a957ddec768b", 00:20:02.359 "is_configured": true, 00:20:02.359 "data_offset": 2048, 00:20:02.359 "data_size": 63488 00:20:02.359 }, 00:20:02.359 { 00:20:02.359 "name": 
"BaseBdev3", 00:20:02.359 "uuid": "98096b2a-072a-4c52-bb56-42976c178acb", 00:20:02.359 "is_configured": true, 00:20:02.359 "data_offset": 2048, 00:20:02.359 "data_size": 63488 00:20:02.359 }, 00:20:02.359 { 00:20:02.359 "name": "BaseBdev4", 00:20:02.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.359 "is_configured": false, 00:20:02.359 "data_offset": 0, 00:20:02.359 "data_size": 0 00:20:02.359 } 00:20:02.359 ] 00:20:02.359 }' 00:20:02.359 13:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.359 13:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.933 13:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:03.192 [2024-07-25 13:20:13.545261] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:03.192 [2024-07-25 13:20:13.545405] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fb2840 00:20:03.192 [2024-07-25 13:20:13.545417] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:03.192 [2024-07-25 13:20:13.545574] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fb2480 00:20:03.192 [2024-07-25 13:20:13.545687] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fb2840 00:20:03.192 [2024-07-25 13:20:13.545696] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1fb2840 00:20:03.192 [2024-07-25 13:20:13.545776] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.192 BaseBdev4 00:20:03.192 13:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:20:03.192 13:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev4 00:20:03.192 13:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:03.192 13:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:03.192 13:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:03.192 13:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:03.192 13:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:03.451 13:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:03.710 [ 00:20:03.710 { 00:20:03.710 "name": "BaseBdev4", 00:20:03.710 "aliases": [ 00:20:03.710 "f7480b47-2df3-41d5-8ae8-392f2c55ec27" 00:20:03.710 ], 00:20:03.710 "product_name": "Malloc disk", 00:20:03.710 "block_size": 512, 00:20:03.710 "num_blocks": 65536, 00:20:03.710 "uuid": "f7480b47-2df3-41d5-8ae8-392f2c55ec27", 00:20:03.710 "assigned_rate_limits": { 00:20:03.710 "rw_ios_per_sec": 0, 00:20:03.710 "rw_mbytes_per_sec": 0, 00:20:03.710 "r_mbytes_per_sec": 0, 00:20:03.710 "w_mbytes_per_sec": 0 00:20:03.710 }, 00:20:03.710 "claimed": true, 00:20:03.710 "claim_type": "exclusive_write", 00:20:03.710 "zoned": false, 00:20:03.710 "supported_io_types": { 00:20:03.710 "read": true, 00:20:03.710 "write": true, 00:20:03.710 "unmap": true, 00:20:03.710 "flush": true, 00:20:03.710 "reset": true, 00:20:03.710 "nvme_admin": false, 00:20:03.710 "nvme_io": false, 00:20:03.710 "nvme_io_md": false, 00:20:03.710 "write_zeroes": true, 00:20:03.710 "zcopy": true, 00:20:03.710 "get_zone_info": false, 00:20:03.710 "zone_management": false, 00:20:03.710 "zone_append": false, 00:20:03.710 
"compare": false, 00:20:03.710 "compare_and_write": false, 00:20:03.710 "abort": true, 00:20:03.711 "seek_hole": false, 00:20:03.711 "seek_data": false, 00:20:03.711 "copy": true, 00:20:03.711 "nvme_iov_md": false 00:20:03.711 }, 00:20:03.711 "memory_domains": [ 00:20:03.711 { 00:20:03.711 "dma_device_id": "system", 00:20:03.711 "dma_device_type": 1 00:20:03.711 }, 00:20:03.711 { 00:20:03.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.711 "dma_device_type": 2 00:20:03.711 } 00:20:03.711 ], 00:20:03.711 "driver_specific": {} 00:20:03.711 } 00:20:03.711 ] 00:20:03.711 13:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:03.711 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:03.711 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:03.711 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:20:03.711 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:03.711 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:03.711 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:03.711 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:03.711 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:03.711 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:03.711 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:03.711 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:03.711 13:20:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:03.711 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.711 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.970 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:03.970 "name": "Existed_Raid", 00:20:03.970 "uuid": "319caaef-bd04-4bdd-bc08-3b090756f824", 00:20:03.970 "strip_size_kb": 64, 00:20:03.970 "state": "online", 00:20:03.970 "raid_level": "concat", 00:20:03.970 "superblock": true, 00:20:03.970 "num_base_bdevs": 4, 00:20:03.970 "num_base_bdevs_discovered": 4, 00:20:03.970 "num_base_bdevs_operational": 4, 00:20:03.970 "base_bdevs_list": [ 00:20:03.970 { 00:20:03.970 "name": "BaseBdev1", 00:20:03.970 "uuid": "aeced770-0222-4aa9-bb33-a6dd51118c55", 00:20:03.970 "is_configured": true, 00:20:03.970 "data_offset": 2048, 00:20:03.970 "data_size": 63488 00:20:03.970 }, 00:20:03.970 { 00:20:03.970 "name": "BaseBdev2", 00:20:03.970 "uuid": "aabc87db-c64a-4c4c-b221-a957ddec768b", 00:20:03.970 "is_configured": true, 00:20:03.970 "data_offset": 2048, 00:20:03.970 "data_size": 63488 00:20:03.970 }, 00:20:03.970 { 00:20:03.970 "name": "BaseBdev3", 00:20:03.970 "uuid": "98096b2a-072a-4c52-bb56-42976c178acb", 00:20:03.970 "is_configured": true, 00:20:03.970 "data_offset": 2048, 00:20:03.970 "data_size": 63488 00:20:03.970 }, 00:20:03.970 { 00:20:03.970 "name": "BaseBdev4", 00:20:03.970 "uuid": "f7480b47-2df3-41d5-8ae8-392f2c55ec27", 00:20:03.970 "is_configured": true, 00:20:03.970 "data_offset": 2048, 00:20:03.970 "data_size": 63488 00:20:03.970 } 00:20:03.970 ] 00:20:03.970 }' 00:20:03.970 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:03.971 13:20:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.537 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:04.537 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:04.537 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:04.537 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:04.537 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:04.537 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:04.537 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:04.537 13:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:04.796 [2024-07-25 13:20:15.033477] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:04.796 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:04.796 "name": "Existed_Raid", 00:20:04.796 "aliases": [ 00:20:04.796 "319caaef-bd04-4bdd-bc08-3b090756f824" 00:20:04.796 ], 00:20:04.796 "product_name": "Raid Volume", 00:20:04.796 "block_size": 512, 00:20:04.796 "num_blocks": 253952, 00:20:04.796 "uuid": "319caaef-bd04-4bdd-bc08-3b090756f824", 00:20:04.796 "assigned_rate_limits": { 00:20:04.796 "rw_ios_per_sec": 0, 00:20:04.796 "rw_mbytes_per_sec": 0, 00:20:04.796 "r_mbytes_per_sec": 0, 00:20:04.796 "w_mbytes_per_sec": 0 00:20:04.796 }, 00:20:04.796 "claimed": false, 00:20:04.796 "zoned": false, 00:20:04.796 "supported_io_types": { 00:20:04.796 "read": true, 00:20:04.796 "write": true, 00:20:04.796 "unmap": true, 
00:20:04.796 "flush": true, 00:20:04.796 "reset": true, 00:20:04.796 "nvme_admin": false, 00:20:04.796 "nvme_io": false, 00:20:04.796 "nvme_io_md": false, 00:20:04.796 "write_zeroes": true, 00:20:04.796 "zcopy": false, 00:20:04.796 "get_zone_info": false, 00:20:04.796 "zone_management": false, 00:20:04.796 "zone_append": false, 00:20:04.796 "compare": false, 00:20:04.796 "compare_and_write": false, 00:20:04.796 "abort": false, 00:20:04.796 "seek_hole": false, 00:20:04.796 "seek_data": false, 00:20:04.796 "copy": false, 00:20:04.796 "nvme_iov_md": false 00:20:04.796 }, 00:20:04.796 "memory_domains": [ 00:20:04.796 { 00:20:04.796 "dma_device_id": "system", 00:20:04.796 "dma_device_type": 1 00:20:04.796 }, 00:20:04.796 { 00:20:04.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.796 "dma_device_type": 2 00:20:04.796 }, 00:20:04.796 { 00:20:04.796 "dma_device_id": "system", 00:20:04.796 "dma_device_type": 1 00:20:04.796 }, 00:20:04.796 { 00:20:04.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.796 "dma_device_type": 2 00:20:04.796 }, 00:20:04.796 { 00:20:04.796 "dma_device_id": "system", 00:20:04.796 "dma_device_type": 1 00:20:04.796 }, 00:20:04.796 { 00:20:04.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.796 "dma_device_type": 2 00:20:04.796 }, 00:20:04.796 { 00:20:04.796 "dma_device_id": "system", 00:20:04.796 "dma_device_type": 1 00:20:04.796 }, 00:20:04.796 { 00:20:04.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.796 "dma_device_type": 2 00:20:04.796 } 00:20:04.796 ], 00:20:04.796 "driver_specific": { 00:20:04.796 "raid": { 00:20:04.796 "uuid": "319caaef-bd04-4bdd-bc08-3b090756f824", 00:20:04.796 "strip_size_kb": 64, 00:20:04.796 "state": "online", 00:20:04.796 "raid_level": "concat", 00:20:04.796 "superblock": true, 00:20:04.796 "num_base_bdevs": 4, 00:20:04.796 "num_base_bdevs_discovered": 4, 00:20:04.796 "num_base_bdevs_operational": 4, 00:20:04.796 "base_bdevs_list": [ 00:20:04.796 { 00:20:04.796 "name": "BaseBdev1", 00:20:04.796 
"uuid": "aeced770-0222-4aa9-bb33-a6dd51118c55", 00:20:04.796 "is_configured": true, 00:20:04.796 "data_offset": 2048, 00:20:04.796 "data_size": 63488 00:20:04.796 }, 00:20:04.796 { 00:20:04.796 "name": "BaseBdev2", 00:20:04.796 "uuid": "aabc87db-c64a-4c4c-b221-a957ddec768b", 00:20:04.796 "is_configured": true, 00:20:04.796 "data_offset": 2048, 00:20:04.796 "data_size": 63488 00:20:04.796 }, 00:20:04.796 { 00:20:04.796 "name": "BaseBdev3", 00:20:04.796 "uuid": "98096b2a-072a-4c52-bb56-42976c178acb", 00:20:04.796 "is_configured": true, 00:20:04.796 "data_offset": 2048, 00:20:04.796 "data_size": 63488 00:20:04.796 }, 00:20:04.796 { 00:20:04.796 "name": "BaseBdev4", 00:20:04.796 "uuid": "f7480b47-2df3-41d5-8ae8-392f2c55ec27", 00:20:04.796 "is_configured": true, 00:20:04.796 "data_offset": 2048, 00:20:04.796 "data_size": 63488 00:20:04.796 } 00:20:04.796 ] 00:20:04.796 } 00:20:04.796 } 00:20:04.796 }' 00:20:04.796 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:04.796 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:04.796 BaseBdev2 00:20:04.796 BaseBdev3 00:20:04.796 BaseBdev4' 00:20:04.796 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:04.796 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:04.796 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:05.055 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:05.055 "name": "BaseBdev1", 00:20:05.055 "aliases": [ 00:20:05.055 "aeced770-0222-4aa9-bb33-a6dd51118c55" 00:20:05.055 ], 00:20:05.055 "product_name": "Malloc disk", 00:20:05.055 
"block_size": 512, 00:20:05.055 "num_blocks": 65536, 00:20:05.055 "uuid": "aeced770-0222-4aa9-bb33-a6dd51118c55", 00:20:05.055 "assigned_rate_limits": { 00:20:05.055 "rw_ios_per_sec": 0, 00:20:05.055 "rw_mbytes_per_sec": 0, 00:20:05.055 "r_mbytes_per_sec": 0, 00:20:05.055 "w_mbytes_per_sec": 0 00:20:05.055 }, 00:20:05.055 "claimed": true, 00:20:05.055 "claim_type": "exclusive_write", 00:20:05.055 "zoned": false, 00:20:05.055 "supported_io_types": { 00:20:05.055 "read": true, 00:20:05.055 "write": true, 00:20:05.055 "unmap": true, 00:20:05.055 "flush": true, 00:20:05.055 "reset": true, 00:20:05.055 "nvme_admin": false, 00:20:05.055 "nvme_io": false, 00:20:05.055 "nvme_io_md": false, 00:20:05.055 "write_zeroes": true, 00:20:05.055 "zcopy": true, 00:20:05.055 "get_zone_info": false, 00:20:05.055 "zone_management": false, 00:20:05.055 "zone_append": false, 00:20:05.055 "compare": false, 00:20:05.055 "compare_and_write": false, 00:20:05.055 "abort": true, 00:20:05.055 "seek_hole": false, 00:20:05.055 "seek_data": false, 00:20:05.055 "copy": true, 00:20:05.055 "nvme_iov_md": false 00:20:05.055 }, 00:20:05.055 "memory_domains": [ 00:20:05.055 { 00:20:05.055 "dma_device_id": "system", 00:20:05.055 "dma_device_type": 1 00:20:05.055 }, 00:20:05.055 { 00:20:05.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.055 "dma_device_type": 2 00:20:05.055 } 00:20:05.055 ], 00:20:05.055 "driver_specific": {} 00:20:05.055 }' 00:20:05.055 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.055 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.055 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:05.055 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.055 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.055 13:20:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:05.055 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.314 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.314 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:05.314 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.314 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.314 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:05.314 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:05.314 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:05.314 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:05.574 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:05.574 "name": "BaseBdev2", 00:20:05.574 "aliases": [ 00:20:05.574 "aabc87db-c64a-4c4c-b221-a957ddec768b" 00:20:05.574 ], 00:20:05.574 "product_name": "Malloc disk", 00:20:05.574 "block_size": 512, 00:20:05.574 "num_blocks": 65536, 00:20:05.574 "uuid": "aabc87db-c64a-4c4c-b221-a957ddec768b", 00:20:05.574 "assigned_rate_limits": { 00:20:05.574 "rw_ios_per_sec": 0, 00:20:05.574 "rw_mbytes_per_sec": 0, 00:20:05.574 "r_mbytes_per_sec": 0, 00:20:05.574 "w_mbytes_per_sec": 0 00:20:05.574 }, 00:20:05.574 "claimed": true, 00:20:05.574 "claim_type": "exclusive_write", 00:20:05.574 "zoned": false, 00:20:05.574 "supported_io_types": { 00:20:05.574 "read": true, 00:20:05.574 "write": true, 00:20:05.574 "unmap": true, 00:20:05.574 
"flush": true, 00:20:05.574 "reset": true, 00:20:05.574 "nvme_admin": false, 00:20:05.574 "nvme_io": false, 00:20:05.574 "nvme_io_md": false, 00:20:05.574 "write_zeroes": true, 00:20:05.574 "zcopy": true, 00:20:05.574 "get_zone_info": false, 00:20:05.574 "zone_management": false, 00:20:05.574 "zone_append": false, 00:20:05.574 "compare": false, 00:20:05.574 "compare_and_write": false, 00:20:05.574 "abort": true, 00:20:05.574 "seek_hole": false, 00:20:05.574 "seek_data": false, 00:20:05.574 "copy": true, 00:20:05.574 "nvme_iov_md": false 00:20:05.574 }, 00:20:05.574 "memory_domains": [ 00:20:05.574 { 00:20:05.574 "dma_device_id": "system", 00:20:05.574 "dma_device_type": 1 00:20:05.574 }, 00:20:05.574 { 00:20:05.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.574 "dma_device_type": 2 00:20:05.574 } 00:20:05.574 ], 00:20:05.574 "driver_specific": {} 00:20:05.574 }' 00:20:05.574 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.574 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.574 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:05.574 13:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.574 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.833 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:05.833 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.833 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.833 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:05.833 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.833 13:20:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.833 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:05.833 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:05.833 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:05.833 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:06.092 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:06.092 "name": "BaseBdev3", 00:20:06.092 "aliases": [ 00:20:06.092 "98096b2a-072a-4c52-bb56-42976c178acb" 00:20:06.092 ], 00:20:06.092 "product_name": "Malloc disk", 00:20:06.092 "block_size": 512, 00:20:06.092 "num_blocks": 65536, 00:20:06.093 "uuid": "98096b2a-072a-4c52-bb56-42976c178acb", 00:20:06.093 "assigned_rate_limits": { 00:20:06.093 "rw_ios_per_sec": 0, 00:20:06.093 "rw_mbytes_per_sec": 0, 00:20:06.093 "r_mbytes_per_sec": 0, 00:20:06.093 "w_mbytes_per_sec": 0 00:20:06.093 }, 00:20:06.093 "claimed": true, 00:20:06.093 "claim_type": "exclusive_write", 00:20:06.093 "zoned": false, 00:20:06.093 "supported_io_types": { 00:20:06.093 "read": true, 00:20:06.093 "write": true, 00:20:06.093 "unmap": true, 00:20:06.093 "flush": true, 00:20:06.093 "reset": true, 00:20:06.093 "nvme_admin": false, 00:20:06.093 "nvme_io": false, 00:20:06.093 "nvme_io_md": false, 00:20:06.093 "write_zeroes": true, 00:20:06.093 "zcopy": true, 00:20:06.093 "get_zone_info": false, 00:20:06.093 "zone_management": false, 00:20:06.093 "zone_append": false, 00:20:06.093 "compare": false, 00:20:06.093 "compare_and_write": false, 00:20:06.093 "abort": true, 00:20:06.093 "seek_hole": false, 00:20:06.093 "seek_data": false, 00:20:06.093 "copy": true, 00:20:06.093 "nvme_iov_md": 
false 00:20:06.093 }, 00:20:06.093 "memory_domains": [ 00:20:06.093 { 00:20:06.093 "dma_device_id": "system", 00:20:06.093 "dma_device_type": 1 00:20:06.093 }, 00:20:06.093 { 00:20:06.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.093 "dma_device_type": 2 00:20:06.093 } 00:20:06.093 ], 00:20:06.093 "driver_specific": {} 00:20:06.093 }' 00:20:06.093 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:06.093 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:06.093 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:06.093 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:06.093 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:06.352 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:06.352 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:06.352 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:06.352 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:06.352 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:06.352 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:06.352 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:06.352 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:06.352 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:06.352 13:20:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:06.610 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:06.610 "name": "BaseBdev4", 00:20:06.610 "aliases": [ 00:20:06.610 "f7480b47-2df3-41d5-8ae8-392f2c55ec27" 00:20:06.610 ], 00:20:06.610 "product_name": "Malloc disk", 00:20:06.610 "block_size": 512, 00:20:06.610 "num_blocks": 65536, 00:20:06.610 "uuid": "f7480b47-2df3-41d5-8ae8-392f2c55ec27", 00:20:06.610 "assigned_rate_limits": { 00:20:06.610 "rw_ios_per_sec": 0, 00:20:06.610 "rw_mbytes_per_sec": 0, 00:20:06.610 "r_mbytes_per_sec": 0, 00:20:06.610 "w_mbytes_per_sec": 0 00:20:06.610 }, 00:20:06.610 "claimed": true, 00:20:06.610 "claim_type": "exclusive_write", 00:20:06.610 "zoned": false, 00:20:06.610 "supported_io_types": { 00:20:06.610 "read": true, 00:20:06.610 "write": true, 00:20:06.610 "unmap": true, 00:20:06.610 "flush": true, 00:20:06.610 "reset": true, 00:20:06.610 "nvme_admin": false, 00:20:06.610 "nvme_io": false, 00:20:06.610 "nvme_io_md": false, 00:20:06.610 "write_zeroes": true, 00:20:06.610 "zcopy": true, 00:20:06.610 "get_zone_info": false, 00:20:06.610 "zone_management": false, 00:20:06.610 "zone_append": false, 00:20:06.610 "compare": false, 00:20:06.610 "compare_and_write": false, 00:20:06.610 "abort": true, 00:20:06.610 "seek_hole": false, 00:20:06.610 "seek_data": false, 00:20:06.610 "copy": true, 00:20:06.610 "nvme_iov_md": false 00:20:06.610 }, 00:20:06.610 "memory_domains": [ 00:20:06.610 { 00:20:06.610 "dma_device_id": "system", 00:20:06.610 "dma_device_type": 1 00:20:06.610 }, 00:20:06.610 { 00:20:06.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.610 "dma_device_type": 2 00:20:06.610 } 00:20:06.610 ], 00:20:06.610 "driver_specific": {} 00:20:06.610 }' 00:20:06.610 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:06.610 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:20:06.610 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:06.610 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:06.610 13:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:06.610 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:06.610 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:06.610 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:06.869 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:06.869 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:06.869 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:06.869 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:06.869 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:06.869 [2024-07-25 13:20:17.355325] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:06.869 [2024-07-25 13:20:17.355348] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:06.869 [2024-07-25 13:20:17.355388] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:07.128 13:20:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.128 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:07.128 "name": "Existed_Raid", 00:20:07.128 "uuid": "319caaef-bd04-4bdd-bc08-3b090756f824", 00:20:07.128 "strip_size_kb": 64, 00:20:07.128 "state": "offline", 00:20:07.128 
"raid_level": "concat", 00:20:07.128 "superblock": true, 00:20:07.128 "num_base_bdevs": 4, 00:20:07.128 "num_base_bdevs_discovered": 3, 00:20:07.128 "num_base_bdevs_operational": 3, 00:20:07.128 "base_bdevs_list": [ 00:20:07.128 { 00:20:07.128 "name": null, 00:20:07.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.128 "is_configured": false, 00:20:07.128 "data_offset": 2048, 00:20:07.128 "data_size": 63488 00:20:07.128 }, 00:20:07.128 { 00:20:07.128 "name": "BaseBdev2", 00:20:07.128 "uuid": "aabc87db-c64a-4c4c-b221-a957ddec768b", 00:20:07.128 "is_configured": true, 00:20:07.128 "data_offset": 2048, 00:20:07.128 "data_size": 63488 00:20:07.128 }, 00:20:07.128 { 00:20:07.128 "name": "BaseBdev3", 00:20:07.128 "uuid": "98096b2a-072a-4c52-bb56-42976c178acb", 00:20:07.129 "is_configured": true, 00:20:07.129 "data_offset": 2048, 00:20:07.129 "data_size": 63488 00:20:07.129 }, 00:20:07.129 { 00:20:07.129 "name": "BaseBdev4", 00:20:07.129 "uuid": "f7480b47-2df3-41d5-8ae8-392f2c55ec27", 00:20:07.129 "is_configured": true, 00:20:07.129 "data_offset": 2048, 00:20:07.129 "data_size": 63488 00:20:07.129 } 00:20:07.129 ] 00:20:07.129 }' 00:20:07.129 13:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:07.129 13:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.696 13:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:07.696 13:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:07.954 13:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.954 13:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:07.954 13:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:20:07.954 13:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:07.954 13:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:08.212 [2024-07-25 13:20:18.619705] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:08.212 13:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:08.212 13:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:08.212 13:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.212 13:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:08.471 13:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:08.472 13:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:08.472 13:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:09.093 [2024-07-25 13:20:19.371531] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:09.093 13:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:09.093 13:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:09.093 13:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.093 13:20:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:09.352 13:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:09.352 13:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:09.352 13:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:09.611 [2024-07-25 13:20:19.842850] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:09.611 [2024-07-25 13:20:19.842886] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fb2840 name Existed_Raid, state offline 00:20:09.611 13:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:09.611 13:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:09.611 13:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.611 13:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:09.611 13:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:09.611 13:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:09.611 13:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:20:09.611 13:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:09.611 13:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:09.611 13:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:09.870 BaseBdev2 00:20:09.870 13:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:09.870 13:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:09.870 13:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:09.870 13:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:09.870 13:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:09.870 13:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:09.870 13:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:10.129 13:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:10.389 [ 00:20:10.389 { 00:20:10.389 "name": "BaseBdev2", 00:20:10.389 "aliases": [ 00:20:10.389 "833243da-d875-4000-a68a-8f5276c5f1f2" 00:20:10.389 ], 00:20:10.389 "product_name": "Malloc disk", 00:20:10.389 "block_size": 512, 00:20:10.389 "num_blocks": 65536, 00:20:10.389 "uuid": "833243da-d875-4000-a68a-8f5276c5f1f2", 00:20:10.389 "assigned_rate_limits": { 00:20:10.389 "rw_ios_per_sec": 0, 00:20:10.389 "rw_mbytes_per_sec": 0, 00:20:10.389 "r_mbytes_per_sec": 0, 00:20:10.389 "w_mbytes_per_sec": 0 00:20:10.389 }, 00:20:10.389 "claimed": false, 00:20:10.389 "zoned": false, 00:20:10.389 "supported_io_types": { 00:20:10.389 "read": true, 00:20:10.389 "write": true, 00:20:10.389 "unmap": true, 00:20:10.389 "flush": 
true, 00:20:10.389 "reset": true, 00:20:10.389 "nvme_admin": false, 00:20:10.389 "nvme_io": false, 00:20:10.389 "nvme_io_md": false, 00:20:10.389 "write_zeroes": true, 00:20:10.389 "zcopy": true, 00:20:10.389 "get_zone_info": false, 00:20:10.389 "zone_management": false, 00:20:10.389 "zone_append": false, 00:20:10.389 "compare": false, 00:20:10.389 "compare_and_write": false, 00:20:10.389 "abort": true, 00:20:10.389 "seek_hole": false, 00:20:10.389 "seek_data": false, 00:20:10.389 "copy": true, 00:20:10.389 "nvme_iov_md": false 00:20:10.389 }, 00:20:10.389 "memory_domains": [ 00:20:10.389 { 00:20:10.389 "dma_device_id": "system", 00:20:10.389 "dma_device_type": 1 00:20:10.389 }, 00:20:10.389 { 00:20:10.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.389 "dma_device_type": 2 00:20:10.389 } 00:20:10.389 ], 00:20:10.389 "driver_specific": {} 00:20:10.389 } 00:20:10.389 ] 00:20:10.389 13:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:10.389 13:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:10.389 13:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:10.389 13:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:10.648 BaseBdev3 00:20:10.648 13:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:10.648 13:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:10.648 13:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:10.648 13:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:10.648 13:20:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:10.648 13:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:10.648 13:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:10.908 13:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:10.908 [ 00:20:10.908 { 00:20:10.908 "name": "BaseBdev3", 00:20:10.908 "aliases": [ 00:20:10.908 "63cebcd5-7189-48b5-8b96-d3d69a351a51" 00:20:10.908 ], 00:20:10.908 "product_name": "Malloc disk", 00:20:10.908 "block_size": 512, 00:20:10.908 "num_blocks": 65536, 00:20:10.908 "uuid": "63cebcd5-7189-48b5-8b96-d3d69a351a51", 00:20:10.908 "assigned_rate_limits": { 00:20:10.908 "rw_ios_per_sec": 0, 00:20:10.908 "rw_mbytes_per_sec": 0, 00:20:10.908 "r_mbytes_per_sec": 0, 00:20:10.908 "w_mbytes_per_sec": 0 00:20:10.908 }, 00:20:10.908 "claimed": false, 00:20:10.908 "zoned": false, 00:20:10.908 "supported_io_types": { 00:20:10.908 "read": true, 00:20:10.908 "write": true, 00:20:10.908 "unmap": true, 00:20:10.908 "flush": true, 00:20:10.908 "reset": true, 00:20:10.908 "nvme_admin": false, 00:20:10.908 "nvme_io": false, 00:20:10.908 "nvme_io_md": false, 00:20:10.908 "write_zeroes": true, 00:20:10.908 "zcopy": true, 00:20:10.908 "get_zone_info": false, 00:20:10.908 "zone_management": false, 00:20:10.908 "zone_append": false, 00:20:10.908 "compare": false, 00:20:10.908 "compare_and_write": false, 00:20:10.908 "abort": true, 00:20:10.908 "seek_hole": false, 00:20:10.908 "seek_data": false, 00:20:10.908 "copy": true, 00:20:10.908 "nvme_iov_md": false 00:20:10.908 }, 00:20:10.908 "memory_domains": [ 00:20:10.908 { 00:20:10.908 "dma_device_id": "system", 00:20:10.908 "dma_device_type": 1 
00:20:10.908 }, 00:20:10.908 { 00:20:10.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.908 "dma_device_type": 2 00:20:10.908 } 00:20:10.908 ], 00:20:10.908 "driver_specific": {} 00:20:10.908 } 00:20:10.908 ] 00:20:10.908 13:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:10.908 13:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:10.908 13:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:10.908 13:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:11.168 BaseBdev4 00:20:11.168 13:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:20:11.168 13:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:20:11.168 13:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:11.168 13:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:11.168 13:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:11.168 13:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:11.168 13:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:11.427 13:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:11.686 [ 00:20:11.686 { 00:20:11.686 "name": "BaseBdev4", 00:20:11.686 "aliases": [ 
00:20:11.686 "03d32be9-755f-46e4-869d-3eb24ab5431c" 00:20:11.686 ], 00:20:11.686 "product_name": "Malloc disk", 00:20:11.686 "block_size": 512, 00:20:11.686 "num_blocks": 65536, 00:20:11.686 "uuid": "03d32be9-755f-46e4-869d-3eb24ab5431c", 00:20:11.686 "assigned_rate_limits": { 00:20:11.686 "rw_ios_per_sec": 0, 00:20:11.686 "rw_mbytes_per_sec": 0, 00:20:11.686 "r_mbytes_per_sec": 0, 00:20:11.686 "w_mbytes_per_sec": 0 00:20:11.686 }, 00:20:11.686 "claimed": false, 00:20:11.686 "zoned": false, 00:20:11.686 "supported_io_types": { 00:20:11.686 "read": true, 00:20:11.686 "write": true, 00:20:11.686 "unmap": true, 00:20:11.686 "flush": true, 00:20:11.686 "reset": true, 00:20:11.686 "nvme_admin": false, 00:20:11.686 "nvme_io": false, 00:20:11.686 "nvme_io_md": false, 00:20:11.686 "write_zeroes": true, 00:20:11.686 "zcopy": true, 00:20:11.686 "get_zone_info": false, 00:20:11.686 "zone_management": false, 00:20:11.686 "zone_append": false, 00:20:11.686 "compare": false, 00:20:11.686 "compare_and_write": false, 00:20:11.686 "abort": true, 00:20:11.686 "seek_hole": false, 00:20:11.686 "seek_data": false, 00:20:11.686 "copy": true, 00:20:11.686 "nvme_iov_md": false 00:20:11.686 }, 00:20:11.686 "memory_domains": [ 00:20:11.686 { 00:20:11.686 "dma_device_id": "system", 00:20:11.686 "dma_device_type": 1 00:20:11.686 }, 00:20:11.686 { 00:20:11.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.686 "dma_device_type": 2 00:20:11.686 } 00:20:11.686 ], 00:20:11.686 "driver_specific": {} 00:20:11.686 } 00:20:11.686 ] 00:20:11.686 13:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:11.686 13:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:11.686 13:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:11.686 13:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:11.686 [2024-07-25 13:20:22.163593] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:11.686 [2024-07-25 13:20:22.163632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:11.686 [2024-07-25 13:20:22.163649] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:11.686 [2024-07-25 13:20:22.164878] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:11.687 [2024-07-25 13:20:22.164917] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:11.946 13:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:11.946 13:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:11.946 13:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:11.946 13:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:11.946 13:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:11.946 13:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:11.946 13:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:11.946 13:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:11.946 13:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:11.946 13:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 
-- # local tmp 00:20:11.946 13:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.946 13:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.946 13:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:11.946 "name": "Existed_Raid", 00:20:11.946 "uuid": "08cbbda5-1764-45aa-863e-2eef040c558a", 00:20:11.946 "strip_size_kb": 64, 00:20:11.946 "state": "configuring", 00:20:11.946 "raid_level": "concat", 00:20:11.946 "superblock": true, 00:20:11.946 "num_base_bdevs": 4, 00:20:11.946 "num_base_bdevs_discovered": 3, 00:20:11.946 "num_base_bdevs_operational": 4, 00:20:11.946 "base_bdevs_list": [ 00:20:11.946 { 00:20:11.946 "name": "BaseBdev1", 00:20:11.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.946 "is_configured": false, 00:20:11.946 "data_offset": 0, 00:20:11.946 "data_size": 0 00:20:11.946 }, 00:20:11.946 { 00:20:11.946 "name": "BaseBdev2", 00:20:11.946 "uuid": "833243da-d875-4000-a68a-8f5276c5f1f2", 00:20:11.946 "is_configured": true, 00:20:11.946 "data_offset": 2048, 00:20:11.946 "data_size": 63488 00:20:11.946 }, 00:20:11.946 { 00:20:11.946 "name": "BaseBdev3", 00:20:11.946 "uuid": "63cebcd5-7189-48b5-8b96-d3d69a351a51", 00:20:11.946 "is_configured": true, 00:20:11.946 "data_offset": 2048, 00:20:11.946 "data_size": 63488 00:20:11.946 }, 00:20:11.946 { 00:20:11.946 "name": "BaseBdev4", 00:20:11.946 "uuid": "03d32be9-755f-46e4-869d-3eb24ab5431c", 00:20:11.946 "is_configured": true, 00:20:11.946 "data_offset": 2048, 00:20:11.946 "data_size": 63488 00:20:11.946 } 00:20:11.946 ] 00:20:11.946 }' 00:20:11.946 13:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:11.946 13:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:20:12.515 13:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:12.774 [2024-07-25 13:20:23.206314] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:12.774 13:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:12.774 13:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:12.774 13:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:12.774 13:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:12.774 13:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:12.774 13:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:12.774 13:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:12.774 13:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:12.774 13:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:12.774 13:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:12.774 13:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.774 13:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.036 13:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:13.036 "name": 
"Existed_Raid", 00:20:13.036 "uuid": "08cbbda5-1764-45aa-863e-2eef040c558a", 00:20:13.036 "strip_size_kb": 64, 00:20:13.036 "state": "configuring", 00:20:13.036 "raid_level": "concat", 00:20:13.036 "superblock": true, 00:20:13.036 "num_base_bdevs": 4, 00:20:13.036 "num_base_bdevs_discovered": 2, 00:20:13.036 "num_base_bdevs_operational": 4, 00:20:13.036 "base_bdevs_list": [ 00:20:13.036 { 00:20:13.036 "name": "BaseBdev1", 00:20:13.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.036 "is_configured": false, 00:20:13.036 "data_offset": 0, 00:20:13.036 "data_size": 0 00:20:13.036 }, 00:20:13.036 { 00:20:13.036 "name": null, 00:20:13.036 "uuid": "833243da-d875-4000-a68a-8f5276c5f1f2", 00:20:13.036 "is_configured": false, 00:20:13.036 "data_offset": 2048, 00:20:13.036 "data_size": 63488 00:20:13.036 }, 00:20:13.036 { 00:20:13.036 "name": "BaseBdev3", 00:20:13.036 "uuid": "63cebcd5-7189-48b5-8b96-d3d69a351a51", 00:20:13.036 "is_configured": true, 00:20:13.036 "data_offset": 2048, 00:20:13.036 "data_size": 63488 00:20:13.036 }, 00:20:13.036 { 00:20:13.036 "name": "BaseBdev4", 00:20:13.036 "uuid": "03d32be9-755f-46e4-869d-3eb24ab5431c", 00:20:13.036 "is_configured": true, 00:20:13.036 "data_offset": 2048, 00:20:13.036 "data_size": 63488 00:20:13.036 } 00:20:13.036 ] 00:20:13.036 }' 00:20:13.036 13:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:13.036 13:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.605 13:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.605 13:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:13.864 13:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:13.864 13:20:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:14.123 [2024-07-25 13:20:24.472730] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:14.123 BaseBdev1 00:20:14.123 13:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:14.123 13:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:14.123 13:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:14.123 13:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:14.123 13:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:14.123 13:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:14.123 13:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:14.382 13:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:14.951 [ 00:20:14.951 { 00:20:14.951 "name": "BaseBdev1", 00:20:14.951 "aliases": [ 00:20:14.951 "1fdb7bd1-9e1f-4cfb-a32f-6e0fb5168834" 00:20:14.951 ], 00:20:14.951 "product_name": "Malloc disk", 00:20:14.951 "block_size": 512, 00:20:14.951 "num_blocks": 65536, 00:20:14.951 "uuid": "1fdb7bd1-9e1f-4cfb-a32f-6e0fb5168834", 00:20:14.951 "assigned_rate_limits": { 00:20:14.951 "rw_ios_per_sec": 0, 00:20:14.951 "rw_mbytes_per_sec": 0, 00:20:14.951 "r_mbytes_per_sec": 0, 00:20:14.951 "w_mbytes_per_sec": 0 00:20:14.951 }, 
00:20:14.951 "claimed": true, 00:20:14.951 "claim_type": "exclusive_write", 00:20:14.951 "zoned": false, 00:20:14.951 "supported_io_types": { 00:20:14.951 "read": true, 00:20:14.951 "write": true, 00:20:14.951 "unmap": true, 00:20:14.951 "flush": true, 00:20:14.951 "reset": true, 00:20:14.951 "nvme_admin": false, 00:20:14.951 "nvme_io": false, 00:20:14.951 "nvme_io_md": false, 00:20:14.951 "write_zeroes": true, 00:20:14.951 "zcopy": true, 00:20:14.951 "get_zone_info": false, 00:20:14.951 "zone_management": false, 00:20:14.951 "zone_append": false, 00:20:14.951 "compare": false, 00:20:14.951 "compare_and_write": false, 00:20:14.951 "abort": true, 00:20:14.951 "seek_hole": false, 00:20:14.951 "seek_data": false, 00:20:14.951 "copy": true, 00:20:14.951 "nvme_iov_md": false 00:20:14.951 }, 00:20:14.951 "memory_domains": [ 00:20:14.951 { 00:20:14.951 "dma_device_id": "system", 00:20:14.951 "dma_device_type": 1 00:20:14.951 }, 00:20:14.951 { 00:20:14.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.951 "dma_device_type": 2 00:20:14.951 } 00:20:14.951 ], 00:20:14.951 "driver_specific": {} 00:20:14.951 } 00:20:14.951 ] 00:20:14.951 13:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:14.951 13:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:14.951 13:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:14.951 13:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:14.951 13:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:14.951 13:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:14.951 13:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:20:14.951 13:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:14.951 13:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:14.951 13:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:14.951 13:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:14.951 13:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.951 13:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.951 13:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:14.951 "name": "Existed_Raid", 00:20:14.951 "uuid": "08cbbda5-1764-45aa-863e-2eef040c558a", 00:20:14.951 "strip_size_kb": 64, 00:20:14.951 "state": "configuring", 00:20:14.951 "raid_level": "concat", 00:20:14.951 "superblock": true, 00:20:14.951 "num_base_bdevs": 4, 00:20:14.951 "num_base_bdevs_discovered": 3, 00:20:14.951 "num_base_bdevs_operational": 4, 00:20:14.951 "base_bdevs_list": [ 00:20:14.951 { 00:20:14.951 "name": "BaseBdev1", 00:20:14.951 "uuid": "1fdb7bd1-9e1f-4cfb-a32f-6e0fb5168834", 00:20:14.951 "is_configured": true, 00:20:14.951 "data_offset": 2048, 00:20:14.951 "data_size": 63488 00:20:14.951 }, 00:20:14.951 { 00:20:14.951 "name": null, 00:20:14.951 "uuid": "833243da-d875-4000-a68a-8f5276c5f1f2", 00:20:14.951 "is_configured": false, 00:20:14.951 "data_offset": 2048, 00:20:14.951 "data_size": 63488 00:20:14.951 }, 00:20:14.951 { 00:20:14.951 "name": "BaseBdev3", 00:20:14.951 "uuid": "63cebcd5-7189-48b5-8b96-d3d69a351a51", 00:20:14.951 "is_configured": true, 00:20:14.951 "data_offset": 2048, 00:20:14.951 "data_size": 63488 00:20:14.951 }, 00:20:14.951 { 00:20:14.951 
"name": "BaseBdev4", 00:20:14.951 "uuid": "03d32be9-755f-46e4-869d-3eb24ab5431c", 00:20:14.951 "is_configured": true, 00:20:14.951 "data_offset": 2048, 00:20:14.951 "data_size": 63488 00:20:14.951 } 00:20:14.951 ] 00:20:14.951 }' 00:20:14.951 13:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:14.951 13:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.520 13:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.520 13:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:15.779 13:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:15.779 13:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:16.039 [2024-07-25 13:20:26.429893] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:16.039 13:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:16.039 13:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:16.039 13:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:16.039 13:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:16.039 13:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:16.039 13:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:16.039 13:20:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.039 13:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.039 13:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.039 13:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.039 13:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.039 13:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.299 13:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.299 "name": "Existed_Raid", 00:20:16.299 "uuid": "08cbbda5-1764-45aa-863e-2eef040c558a", 00:20:16.299 "strip_size_kb": 64, 00:20:16.299 "state": "configuring", 00:20:16.299 "raid_level": "concat", 00:20:16.299 "superblock": true, 00:20:16.299 "num_base_bdevs": 4, 00:20:16.299 "num_base_bdevs_discovered": 2, 00:20:16.299 "num_base_bdevs_operational": 4, 00:20:16.299 "base_bdevs_list": [ 00:20:16.299 { 00:20:16.299 "name": "BaseBdev1", 00:20:16.299 "uuid": "1fdb7bd1-9e1f-4cfb-a32f-6e0fb5168834", 00:20:16.299 "is_configured": true, 00:20:16.299 "data_offset": 2048, 00:20:16.299 "data_size": 63488 00:20:16.299 }, 00:20:16.299 { 00:20:16.299 "name": null, 00:20:16.299 "uuid": "833243da-d875-4000-a68a-8f5276c5f1f2", 00:20:16.299 "is_configured": false, 00:20:16.299 "data_offset": 2048, 00:20:16.299 "data_size": 63488 00:20:16.299 }, 00:20:16.299 { 00:20:16.299 "name": null, 00:20:16.299 "uuid": "63cebcd5-7189-48b5-8b96-d3d69a351a51", 00:20:16.299 "is_configured": false, 00:20:16.299 "data_offset": 2048, 00:20:16.299 "data_size": 63488 00:20:16.299 }, 00:20:16.299 { 00:20:16.299 "name": "BaseBdev4", 
00:20:16.299 "uuid": "03d32be9-755f-46e4-869d-3eb24ab5431c", 00:20:16.299 "is_configured": true, 00:20:16.299 "data_offset": 2048, 00:20:16.299 "data_size": 63488 00:20:16.299 } 00:20:16.299 ] 00:20:16.299 }' 00:20:16.299 13:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.299 13:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.868 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.868 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:17.127 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:17.127 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:17.387 [2024-07-25 13:20:27.709272] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:17.387 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:17.387 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:17.387 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:17.387 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:17.387 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:17.387 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:17.387 13:20:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:17.387 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:17.387 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:17.387 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:17.387 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.387 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.646 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:17.646 "name": "Existed_Raid", 00:20:17.646 "uuid": "08cbbda5-1764-45aa-863e-2eef040c558a", 00:20:17.646 "strip_size_kb": 64, 00:20:17.646 "state": "configuring", 00:20:17.646 "raid_level": "concat", 00:20:17.646 "superblock": true, 00:20:17.646 "num_base_bdevs": 4, 00:20:17.646 "num_base_bdevs_discovered": 3, 00:20:17.646 "num_base_bdevs_operational": 4, 00:20:17.646 "base_bdevs_list": [ 00:20:17.646 { 00:20:17.646 "name": "BaseBdev1", 00:20:17.646 "uuid": "1fdb7bd1-9e1f-4cfb-a32f-6e0fb5168834", 00:20:17.646 "is_configured": true, 00:20:17.646 "data_offset": 2048, 00:20:17.646 "data_size": 63488 00:20:17.646 }, 00:20:17.646 { 00:20:17.646 "name": null, 00:20:17.646 "uuid": "833243da-d875-4000-a68a-8f5276c5f1f2", 00:20:17.646 "is_configured": false, 00:20:17.646 "data_offset": 2048, 00:20:17.646 "data_size": 63488 00:20:17.646 }, 00:20:17.646 { 00:20:17.646 "name": "BaseBdev3", 00:20:17.646 "uuid": "63cebcd5-7189-48b5-8b96-d3d69a351a51", 00:20:17.646 "is_configured": true, 00:20:17.646 "data_offset": 2048, 00:20:17.646 "data_size": 63488 00:20:17.646 }, 00:20:17.646 { 00:20:17.646 "name": "BaseBdev4", 
00:20:17.646 "uuid": "03d32be9-755f-46e4-869d-3eb24ab5431c", 00:20:17.646 "is_configured": true, 00:20:17.646 "data_offset": 2048, 00:20:17.646 "data_size": 63488 00:20:17.646 } 00:20:17.646 ] 00:20:17.646 }' 00:20:17.646 13:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:17.646 13:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.584 13:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.584 13:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:18.844 13:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:18.844 13:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:19.103 [2024-07-25 13:20:29.530087] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:19.103 13:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:19.103 13:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:19.103 13:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:19.103 13:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:19.103 13:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:19.103 13:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:19.103 13:20:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:19.103 13:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:19.103 13:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:19.103 13:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:19.103 13:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.103 13:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.672 13:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:19.672 "name": "Existed_Raid", 00:20:19.672 "uuid": "08cbbda5-1764-45aa-863e-2eef040c558a", 00:20:19.672 "strip_size_kb": 64, 00:20:19.672 "state": "configuring", 00:20:19.672 "raid_level": "concat", 00:20:19.672 "superblock": true, 00:20:19.672 "num_base_bdevs": 4, 00:20:19.672 "num_base_bdevs_discovered": 2, 00:20:19.672 "num_base_bdevs_operational": 4, 00:20:19.672 "base_bdevs_list": [ 00:20:19.672 { 00:20:19.672 "name": null, 00:20:19.672 "uuid": "1fdb7bd1-9e1f-4cfb-a32f-6e0fb5168834", 00:20:19.672 "is_configured": false, 00:20:19.672 "data_offset": 2048, 00:20:19.672 "data_size": 63488 00:20:19.672 }, 00:20:19.672 { 00:20:19.672 "name": null, 00:20:19.672 "uuid": "833243da-d875-4000-a68a-8f5276c5f1f2", 00:20:19.672 "is_configured": false, 00:20:19.672 "data_offset": 2048, 00:20:19.672 "data_size": 63488 00:20:19.672 }, 00:20:19.672 { 00:20:19.672 "name": "BaseBdev3", 00:20:19.672 "uuid": "63cebcd5-7189-48b5-8b96-d3d69a351a51", 00:20:19.672 "is_configured": true, 00:20:19.672 "data_offset": 2048, 00:20:19.672 "data_size": 63488 00:20:19.672 }, 00:20:19.672 { 00:20:19.672 "name": "BaseBdev4", 00:20:19.672 "uuid": 
"03d32be9-755f-46e4-869d-3eb24ab5431c", 00:20:19.672 "is_configured": true, 00:20:19.672 "data_offset": 2048, 00:20:19.672 "data_size": 63488 00:20:19.672 } 00:20:19.672 ] 00:20:19.672 }' 00:20:19.672 13:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:19.672 13:20:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.241 13:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:20.241 13:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.500 13:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:20.500 13:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:20.759 [2024-07-25 13:20:31.084095] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:20.759 13:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:20.759 13:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:20.759 13:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:20.759 13:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:20.759 13:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:20.759 13:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:20.759 13:20:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:20.759 13:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:20.759 13:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:20.759 13:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:20.759 13:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.759 13:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.017 13:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:21.017 "name": "Existed_Raid", 00:20:21.017 "uuid": "08cbbda5-1764-45aa-863e-2eef040c558a", 00:20:21.017 "strip_size_kb": 64, 00:20:21.017 "state": "configuring", 00:20:21.017 "raid_level": "concat", 00:20:21.017 "superblock": true, 00:20:21.017 "num_base_bdevs": 4, 00:20:21.017 "num_base_bdevs_discovered": 3, 00:20:21.017 "num_base_bdevs_operational": 4, 00:20:21.017 "base_bdevs_list": [ 00:20:21.017 { 00:20:21.017 "name": null, 00:20:21.017 "uuid": "1fdb7bd1-9e1f-4cfb-a32f-6e0fb5168834", 00:20:21.017 "is_configured": false, 00:20:21.017 "data_offset": 2048, 00:20:21.017 "data_size": 63488 00:20:21.017 }, 00:20:21.017 { 00:20:21.017 "name": "BaseBdev2", 00:20:21.017 "uuid": "833243da-d875-4000-a68a-8f5276c5f1f2", 00:20:21.017 "is_configured": true, 00:20:21.017 "data_offset": 2048, 00:20:21.017 "data_size": 63488 00:20:21.017 }, 00:20:21.017 { 00:20:21.017 "name": "BaseBdev3", 00:20:21.017 "uuid": "63cebcd5-7189-48b5-8b96-d3d69a351a51", 00:20:21.017 "is_configured": true, 00:20:21.017 "data_offset": 2048, 00:20:21.017 "data_size": 63488 00:20:21.017 }, 00:20:21.017 { 00:20:21.017 "name": "BaseBdev4", 
00:20:21.017 "uuid": "03d32be9-755f-46e4-869d-3eb24ab5431c", 00:20:21.017 "is_configured": true, 00:20:21.017 "data_offset": 2048, 00:20:21.017 "data_size": 63488 00:20:21.017 } 00:20:21.017 ] 00:20:21.017 }' 00:20:21.017 13:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:21.017 13:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.585 13:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.585 13:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:21.844 13:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:21.844 13:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.844 13:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:22.137 13:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1fdb7bd1-9e1f-4cfb-a32f-6e0fb5168834 00:20:22.397 [2024-07-25 13:20:32.823892] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:22.397 [2024-07-25 13:20:32.824029] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fb1360 00:20:22.397 [2024-07-25 13:20:32.824040] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:22.397 [2024-07-25 13:20:32.824207] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2156070 00:20:22.397 [2024-07-25 13:20:32.824319] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fb1360 00:20:22.397 [2024-07-25 13:20:32.824328] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1fb1360 00:20:22.397 [2024-07-25 13:20:32.824408] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.397 NewBaseBdev 00:20:22.397 13:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:22.397 13:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:20:22.397 13:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:22.397 13:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:22.397 13:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:22.397 13:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:22.397 13:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:22.657 13:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:22.916 [ 00:20:22.916 { 00:20:22.916 "name": "NewBaseBdev", 00:20:22.916 "aliases": [ 00:20:22.916 "1fdb7bd1-9e1f-4cfb-a32f-6e0fb5168834" 00:20:22.916 ], 00:20:22.916 "product_name": "Malloc disk", 00:20:22.916 "block_size": 512, 00:20:22.916 "num_blocks": 65536, 00:20:22.916 "uuid": "1fdb7bd1-9e1f-4cfb-a32f-6e0fb5168834", 00:20:22.916 "assigned_rate_limits": { 00:20:22.916 "rw_ios_per_sec": 0, 00:20:22.916 "rw_mbytes_per_sec": 0, 00:20:22.916 "r_mbytes_per_sec": 0, 00:20:22.916 
"w_mbytes_per_sec": 0 00:20:22.916 }, 00:20:22.916 "claimed": true, 00:20:22.916 "claim_type": "exclusive_write", 00:20:22.916 "zoned": false, 00:20:22.916 "supported_io_types": { 00:20:22.916 "read": true, 00:20:22.916 "write": true, 00:20:22.916 "unmap": true, 00:20:22.916 "flush": true, 00:20:22.916 "reset": true, 00:20:22.916 "nvme_admin": false, 00:20:22.916 "nvme_io": false, 00:20:22.916 "nvme_io_md": false, 00:20:22.916 "write_zeroes": true, 00:20:22.916 "zcopy": true, 00:20:22.916 "get_zone_info": false, 00:20:22.916 "zone_management": false, 00:20:22.916 "zone_append": false, 00:20:22.916 "compare": false, 00:20:22.916 "compare_and_write": false, 00:20:22.916 "abort": true, 00:20:22.916 "seek_hole": false, 00:20:22.916 "seek_data": false, 00:20:22.916 "copy": true, 00:20:22.916 "nvme_iov_md": false 00:20:22.916 }, 00:20:22.916 "memory_domains": [ 00:20:22.916 { 00:20:22.916 "dma_device_id": "system", 00:20:22.916 "dma_device_type": 1 00:20:22.916 }, 00:20:22.916 { 00:20:22.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.916 "dma_device_type": 2 00:20:22.916 } 00:20:22.916 ], 00:20:22.916 "driver_specific": {} 00:20:22.916 } 00:20:22.916 ] 00:20:22.916 13:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:22.916 13:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:20:22.916 13:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:22.916 13:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:22.916 13:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:22.916 13:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:22.916 13:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:20:22.916 13:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:22.916 13:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:22.916 13:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:22.916 13:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:22.916 13:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.916 13:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.175 13:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:23.175 "name": "Existed_Raid", 00:20:23.175 "uuid": "08cbbda5-1764-45aa-863e-2eef040c558a", 00:20:23.175 "strip_size_kb": 64, 00:20:23.175 "state": "online", 00:20:23.175 "raid_level": "concat", 00:20:23.175 "superblock": true, 00:20:23.175 "num_base_bdevs": 4, 00:20:23.175 "num_base_bdevs_discovered": 4, 00:20:23.175 "num_base_bdevs_operational": 4, 00:20:23.175 "base_bdevs_list": [ 00:20:23.175 { 00:20:23.175 "name": "NewBaseBdev", 00:20:23.175 "uuid": "1fdb7bd1-9e1f-4cfb-a32f-6e0fb5168834", 00:20:23.175 "is_configured": true, 00:20:23.175 "data_offset": 2048, 00:20:23.175 "data_size": 63488 00:20:23.175 }, 00:20:23.175 { 00:20:23.175 "name": "BaseBdev2", 00:20:23.175 "uuid": "833243da-d875-4000-a68a-8f5276c5f1f2", 00:20:23.175 "is_configured": true, 00:20:23.175 "data_offset": 2048, 00:20:23.175 "data_size": 63488 00:20:23.175 }, 00:20:23.175 { 00:20:23.175 "name": "BaseBdev3", 00:20:23.175 "uuid": "63cebcd5-7189-48b5-8b96-d3d69a351a51", 00:20:23.175 "is_configured": true, 00:20:23.175 "data_offset": 2048, 00:20:23.175 "data_size": 63488 00:20:23.175 }, 
00:20:23.175 { 00:20:23.175 "name": "BaseBdev4", 00:20:23.175 "uuid": "03d32be9-755f-46e4-869d-3eb24ab5431c", 00:20:23.175 "is_configured": true, 00:20:23.175 "data_offset": 2048, 00:20:23.175 "data_size": 63488 00:20:23.175 } 00:20:23.175 ] 00:20:23.175 }' 00:20:23.175 13:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:23.176 13:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.744 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:23.744 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:23.744 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:23.744 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:23.744 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:23.744 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:23.744 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:23.744 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:24.004 [2024-07-25 13:20:34.272017] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:24.004 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:24.004 "name": "Existed_Raid", 00:20:24.004 "aliases": [ 00:20:24.004 "08cbbda5-1764-45aa-863e-2eef040c558a" 00:20:24.004 ], 00:20:24.004 "product_name": "Raid Volume", 00:20:24.004 "block_size": 512, 00:20:24.004 "num_blocks": 253952, 00:20:24.004 "uuid": "08cbbda5-1764-45aa-863e-2eef040c558a", 
00:20:24.004 "assigned_rate_limits": { 00:20:24.004 "rw_ios_per_sec": 0, 00:20:24.004 "rw_mbytes_per_sec": 0, 00:20:24.004 "r_mbytes_per_sec": 0, 00:20:24.004 "w_mbytes_per_sec": 0 00:20:24.004 }, 00:20:24.004 "claimed": false, 00:20:24.004 "zoned": false, 00:20:24.004 "supported_io_types": { 00:20:24.004 "read": true, 00:20:24.004 "write": true, 00:20:24.004 "unmap": true, 00:20:24.004 "flush": true, 00:20:24.004 "reset": true, 00:20:24.004 "nvme_admin": false, 00:20:24.004 "nvme_io": false, 00:20:24.004 "nvme_io_md": false, 00:20:24.004 "write_zeroes": true, 00:20:24.004 "zcopy": false, 00:20:24.004 "get_zone_info": false, 00:20:24.004 "zone_management": false, 00:20:24.004 "zone_append": false, 00:20:24.004 "compare": false, 00:20:24.004 "compare_and_write": false, 00:20:24.004 "abort": false, 00:20:24.004 "seek_hole": false, 00:20:24.004 "seek_data": false, 00:20:24.004 "copy": false, 00:20:24.004 "nvme_iov_md": false 00:20:24.004 }, 00:20:24.004 "memory_domains": [ 00:20:24.004 { 00:20:24.004 "dma_device_id": "system", 00:20:24.004 "dma_device_type": 1 00:20:24.004 }, 00:20:24.004 { 00:20:24.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.004 "dma_device_type": 2 00:20:24.004 }, 00:20:24.004 { 00:20:24.004 "dma_device_id": "system", 00:20:24.004 "dma_device_type": 1 00:20:24.004 }, 00:20:24.004 { 00:20:24.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.004 "dma_device_type": 2 00:20:24.004 }, 00:20:24.004 { 00:20:24.004 "dma_device_id": "system", 00:20:24.004 "dma_device_type": 1 00:20:24.004 }, 00:20:24.004 { 00:20:24.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.004 "dma_device_type": 2 00:20:24.004 }, 00:20:24.004 { 00:20:24.004 "dma_device_id": "system", 00:20:24.004 "dma_device_type": 1 00:20:24.004 }, 00:20:24.004 { 00:20:24.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.004 "dma_device_type": 2 00:20:24.004 } 00:20:24.004 ], 00:20:24.004 "driver_specific": { 00:20:24.004 "raid": { 00:20:24.004 "uuid": 
"08cbbda5-1764-45aa-863e-2eef040c558a", 00:20:24.004 "strip_size_kb": 64, 00:20:24.004 "state": "online", 00:20:24.004 "raid_level": "concat", 00:20:24.004 "superblock": true, 00:20:24.004 "num_base_bdevs": 4, 00:20:24.004 "num_base_bdevs_discovered": 4, 00:20:24.004 "num_base_bdevs_operational": 4, 00:20:24.004 "base_bdevs_list": [ 00:20:24.004 { 00:20:24.004 "name": "NewBaseBdev", 00:20:24.004 "uuid": "1fdb7bd1-9e1f-4cfb-a32f-6e0fb5168834", 00:20:24.004 "is_configured": true, 00:20:24.004 "data_offset": 2048, 00:20:24.004 "data_size": 63488 00:20:24.004 }, 00:20:24.004 { 00:20:24.004 "name": "BaseBdev2", 00:20:24.004 "uuid": "833243da-d875-4000-a68a-8f5276c5f1f2", 00:20:24.004 "is_configured": true, 00:20:24.004 "data_offset": 2048, 00:20:24.004 "data_size": 63488 00:20:24.004 }, 00:20:24.004 { 00:20:24.004 "name": "BaseBdev3", 00:20:24.004 "uuid": "63cebcd5-7189-48b5-8b96-d3d69a351a51", 00:20:24.004 "is_configured": true, 00:20:24.004 "data_offset": 2048, 00:20:24.004 "data_size": 63488 00:20:24.004 }, 00:20:24.004 { 00:20:24.004 "name": "BaseBdev4", 00:20:24.004 "uuid": "03d32be9-755f-46e4-869d-3eb24ab5431c", 00:20:24.004 "is_configured": true, 00:20:24.004 "data_offset": 2048, 00:20:24.004 "data_size": 63488 00:20:24.004 } 00:20:24.004 ] 00:20:24.004 } 00:20:24.004 } 00:20:24.004 }' 00:20:24.004 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:24.004 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:24.004 BaseBdev2 00:20:24.004 BaseBdev3 00:20:24.004 BaseBdev4' 00:20:24.004 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:24.004 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev 00:20:24.004 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:24.263 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:24.263 "name": "NewBaseBdev", 00:20:24.263 "aliases": [ 00:20:24.263 "1fdb7bd1-9e1f-4cfb-a32f-6e0fb5168834" 00:20:24.263 ], 00:20:24.263 "product_name": "Malloc disk", 00:20:24.263 "block_size": 512, 00:20:24.263 "num_blocks": 65536, 00:20:24.263 "uuid": "1fdb7bd1-9e1f-4cfb-a32f-6e0fb5168834", 00:20:24.263 "assigned_rate_limits": { 00:20:24.263 "rw_ios_per_sec": 0, 00:20:24.263 "rw_mbytes_per_sec": 0, 00:20:24.263 "r_mbytes_per_sec": 0, 00:20:24.263 "w_mbytes_per_sec": 0 00:20:24.263 }, 00:20:24.263 "claimed": true, 00:20:24.263 "claim_type": "exclusive_write", 00:20:24.263 "zoned": false, 00:20:24.263 "supported_io_types": { 00:20:24.263 "read": true, 00:20:24.263 "write": true, 00:20:24.263 "unmap": true, 00:20:24.263 "flush": true, 00:20:24.263 "reset": true, 00:20:24.263 "nvme_admin": false, 00:20:24.263 "nvme_io": false, 00:20:24.263 "nvme_io_md": false, 00:20:24.263 "write_zeroes": true, 00:20:24.263 "zcopy": true, 00:20:24.263 "get_zone_info": false, 00:20:24.263 "zone_management": false, 00:20:24.263 "zone_append": false, 00:20:24.263 "compare": false, 00:20:24.263 "compare_and_write": false, 00:20:24.263 "abort": true, 00:20:24.263 "seek_hole": false, 00:20:24.263 "seek_data": false, 00:20:24.263 "copy": true, 00:20:24.263 "nvme_iov_md": false 00:20:24.263 }, 00:20:24.263 "memory_domains": [ 00:20:24.263 { 00:20:24.263 "dma_device_id": "system", 00:20:24.263 "dma_device_type": 1 00:20:24.263 }, 00:20:24.263 { 00:20:24.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.263 "dma_device_type": 2 00:20:24.263 } 00:20:24.263 ], 00:20:24.263 "driver_specific": {} 00:20:24.263 }' 00:20:24.263 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:24.263 13:20:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:24.263 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:24.263 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:24.263 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:24.263 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:24.263 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:24.522 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:24.522 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:24.522 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:24.522 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:24.522 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:24.522 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:24.522 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:24.522 13:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:24.781 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:24.781 "name": "BaseBdev2", 00:20:24.781 "aliases": [ 00:20:24.781 "833243da-d875-4000-a68a-8f5276c5f1f2" 00:20:24.781 ], 00:20:24.781 "product_name": "Malloc disk", 00:20:24.781 "block_size": 512, 00:20:24.781 "num_blocks": 65536, 00:20:24.781 "uuid": "833243da-d875-4000-a68a-8f5276c5f1f2", 00:20:24.781 
"assigned_rate_limits": { 00:20:24.781 "rw_ios_per_sec": 0, 00:20:24.781 "rw_mbytes_per_sec": 0, 00:20:24.781 "r_mbytes_per_sec": 0, 00:20:24.781 "w_mbytes_per_sec": 0 00:20:24.781 }, 00:20:24.781 "claimed": true, 00:20:24.781 "claim_type": "exclusive_write", 00:20:24.781 "zoned": false, 00:20:24.781 "supported_io_types": { 00:20:24.781 "read": true, 00:20:24.781 "write": true, 00:20:24.781 "unmap": true, 00:20:24.781 "flush": true, 00:20:24.781 "reset": true, 00:20:24.781 "nvme_admin": false, 00:20:24.781 "nvme_io": false, 00:20:24.781 "nvme_io_md": false, 00:20:24.781 "write_zeroes": true, 00:20:24.781 "zcopy": true, 00:20:24.781 "get_zone_info": false, 00:20:24.781 "zone_management": false, 00:20:24.781 "zone_append": false, 00:20:24.781 "compare": false, 00:20:24.781 "compare_and_write": false, 00:20:24.781 "abort": true, 00:20:24.781 "seek_hole": false, 00:20:24.781 "seek_data": false, 00:20:24.781 "copy": true, 00:20:24.781 "nvme_iov_md": false 00:20:24.781 }, 00:20:24.781 "memory_domains": [ 00:20:24.781 { 00:20:24.781 "dma_device_id": "system", 00:20:24.781 "dma_device_type": 1 00:20:24.781 }, 00:20:24.781 { 00:20:24.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.781 "dma_device_type": 2 00:20:24.781 } 00:20:24.781 ], 00:20:24.781 "driver_specific": {} 00:20:24.781 }' 00:20:24.781 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:24.781 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:24.781 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:24.781 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:24.781 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:24.781 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:24.781 13:20:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:25.039 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:25.039 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:25.039 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:25.039 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:25.039 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:25.039 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:25.039 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:25.039 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:25.297 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:25.297 "name": "BaseBdev3", 00:20:25.297 "aliases": [ 00:20:25.297 "63cebcd5-7189-48b5-8b96-d3d69a351a51" 00:20:25.297 ], 00:20:25.297 "product_name": "Malloc disk", 00:20:25.297 "block_size": 512, 00:20:25.297 "num_blocks": 65536, 00:20:25.297 "uuid": "63cebcd5-7189-48b5-8b96-d3d69a351a51", 00:20:25.297 "assigned_rate_limits": { 00:20:25.297 "rw_ios_per_sec": 0, 00:20:25.297 "rw_mbytes_per_sec": 0, 00:20:25.297 "r_mbytes_per_sec": 0, 00:20:25.297 "w_mbytes_per_sec": 0 00:20:25.297 }, 00:20:25.297 "claimed": true, 00:20:25.297 "claim_type": "exclusive_write", 00:20:25.297 "zoned": false, 00:20:25.297 "supported_io_types": { 00:20:25.297 "read": true, 00:20:25.297 "write": true, 00:20:25.297 "unmap": true, 00:20:25.297 "flush": true, 00:20:25.297 "reset": true, 00:20:25.297 "nvme_admin": false, 00:20:25.297 "nvme_io": false, 00:20:25.297 "nvme_io_md": false, 00:20:25.297 
"write_zeroes": true, 00:20:25.297 "zcopy": true, 00:20:25.297 "get_zone_info": false, 00:20:25.297 "zone_management": false, 00:20:25.297 "zone_append": false, 00:20:25.297 "compare": false, 00:20:25.297 "compare_and_write": false, 00:20:25.297 "abort": true, 00:20:25.297 "seek_hole": false, 00:20:25.297 "seek_data": false, 00:20:25.297 "copy": true, 00:20:25.297 "nvme_iov_md": false 00:20:25.297 }, 00:20:25.297 "memory_domains": [ 00:20:25.297 { 00:20:25.297 "dma_device_id": "system", 00:20:25.297 "dma_device_type": 1 00:20:25.297 }, 00:20:25.297 { 00:20:25.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.297 "dma_device_type": 2 00:20:25.297 } 00:20:25.297 ], 00:20:25.297 "driver_specific": {} 00:20:25.297 }' 00:20:25.297 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:25.297 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:25.297 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:25.297 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:25.554 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:25.554 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:25.554 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:25.554 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:25.554 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:25.554 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:25.554 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:25.554 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:20:25.554 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:25.554 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:25.554 13:20:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:25.813 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:25.813 "name": "BaseBdev4", 00:20:25.813 "aliases": [ 00:20:25.813 "03d32be9-755f-46e4-869d-3eb24ab5431c" 00:20:25.813 ], 00:20:25.813 "product_name": "Malloc disk", 00:20:25.813 "block_size": 512, 00:20:25.813 "num_blocks": 65536, 00:20:25.813 "uuid": "03d32be9-755f-46e4-869d-3eb24ab5431c", 00:20:25.813 "assigned_rate_limits": { 00:20:25.813 "rw_ios_per_sec": 0, 00:20:25.813 "rw_mbytes_per_sec": 0, 00:20:25.813 "r_mbytes_per_sec": 0, 00:20:25.813 "w_mbytes_per_sec": 0 00:20:25.813 }, 00:20:25.813 "claimed": true, 00:20:25.813 "claim_type": "exclusive_write", 00:20:25.813 "zoned": false, 00:20:25.813 "supported_io_types": { 00:20:25.813 "read": true, 00:20:25.813 "write": true, 00:20:25.813 "unmap": true, 00:20:25.813 "flush": true, 00:20:25.813 "reset": true, 00:20:25.813 "nvme_admin": false, 00:20:25.813 "nvme_io": false, 00:20:25.813 "nvme_io_md": false, 00:20:25.813 "write_zeroes": true, 00:20:25.813 "zcopy": true, 00:20:25.813 "get_zone_info": false, 00:20:25.813 "zone_management": false, 00:20:25.813 "zone_append": false, 00:20:25.813 "compare": false, 00:20:25.813 "compare_and_write": false, 00:20:25.813 "abort": true, 00:20:25.813 "seek_hole": false, 00:20:25.813 "seek_data": false, 00:20:25.813 "copy": true, 00:20:25.813 "nvme_iov_md": false 00:20:25.813 }, 00:20:25.813 "memory_domains": [ 00:20:25.813 { 00:20:25.813 "dma_device_id": "system", 00:20:25.813 "dma_device_type": 1 00:20:25.813 }, 00:20:25.813 { 00:20:25.813 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.813 "dma_device_type": 2 00:20:25.813 } 00:20:25.813 ], 00:20:25.813 "driver_specific": {} 00:20:25.813 }' 00:20:25.813 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:25.813 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.071 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:26.071 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.071 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.071 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:26.071 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:26.071 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:26.071 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:26.071 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:26.071 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:26.071 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:26.071 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:26.330 [2024-07-25 13:20:36.766304] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:26.330 [2024-07-25 13:20:36.766327] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:26.330 [2024-07-25 13:20:36.766376] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:20:26.330 [2024-07-25 13:20:36.766429] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:26.330 [2024-07-25 13:20:36.766440] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fb1360 name Existed_Raid, state offline 00:20:26.330 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 923425 00:20:26.330 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 923425 ']' 00:20:26.330 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 923425 00:20:26.330 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:20:26.330 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:26.330 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 923425 00:20:26.588 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:26.589 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:26.589 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 923425' 00:20:26.589 killing process with pid 923425 00:20:26.589 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 923425 00:20:26.589 [2024-07-25 13:20:36.839919] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:26.589 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 923425 00:20:26.589 [2024-07-25 13:20:36.870571] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:26.589 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:26.589 
00:20:26.589
real 0m31.795s
user 0m58.410s
sys 0m5.643s
13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:26.589 13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:26.589 ************************************
00:20:26.589 END TEST raid_state_function_test_sb
00:20:26.589 ************************************
00:20:26.848 13:20:37 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 4
00:20:26.848 13:20:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:20:26.848 13:20:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:20:26.848 13:20:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:20:26.848 ************************************
00:20:26.848 START TEST raid_superblock_test
00:20:26.848 ************************************
00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4
00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat
00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4
00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=()
00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc
00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=()
00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt
00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=()
00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid
00:20:26.848 13:20:37 bdev_raid.raid_superblock_test --
bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=929380 00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 929380 /var/tmp/spdk-raid.sock 00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 929380 ']' 00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:26.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:26.848 13:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:20:26.848 [2024-07-25 13:20:37.203685] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:20:26.848 [2024-07-25 13:20:37.203743] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929380 ]
00:20:26.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.848 EAL: Requested device 0000:3d:01.0 cannot be used
00:20:26.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.848 EAL: Requested device 0000:3d:01.1 cannot be used
00:20:26.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.848 EAL: Requested device 0000:3d:01.2 cannot be used
00:20:26.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.848 EAL: Requested device 0000:3d:01.3 cannot be used
00:20:26.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.848 EAL: Requested device 0000:3d:01.4 cannot be used
00:20:26.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.848 EAL: Requested device 0000:3d:01.5 cannot be used
00:20:26.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.848 EAL: Requested device 0000:3d:01.6 cannot be used
00:20:26.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.848 EAL: Requested device 0000:3d:01.7 cannot be used
00:20:26.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.848 EAL: Requested device 0000:3d:02.0 cannot be used
00:20:26.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.848 EAL: Requested device 0000:3d:02.1 cannot be used
00:20:26.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3d:02.2 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3d:02.3 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3d:02.4 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3d:02.5 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3d:02.6 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3d:02.7 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3f:01.0 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3f:01.1 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3f:01.2 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3f:01.3 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3f:01.4 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3f:01.5 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3f:01.6 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3f:01.7 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3f:02.0 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3f:02.1 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3f:02.2 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3f:02.3 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3f:02.4 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3f:02.5 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3f:02.6 cannot be used
00:20:26.849 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:26.849 EAL: Requested device 0000:3f:02.7 cannot be used
00:20:27.108 [2024-07-25 13:20:37.336076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:27.108 [2024-07-25 13:20:37.418983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:27.108 [2024-07-25 13:20:37.486743] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:27.108 [2024-07-25 13:20:37.486780] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:27.676 13:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:27.676 13:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:20:27.676 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 ))
00:20:27.676 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs ))
00:20:27.676 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:20:27.676 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:20:27.676 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:27.676 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:27.676 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:27.676 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:27.676 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:27.935 malloc1 00:20:27.935 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:28.194 [2024-07-25 13:20:38.488888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:28.194 [2024-07-25 13:20:38.488935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.194 [2024-07-25 13:20:38.488954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19a42f0 00:20:28.194 [2024-07-25 13:20:38.488966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.194 [2024-07-25 13:20:38.490462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.194 [2024-07-25 13:20:38.490497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:28.194 pt1 00:20:28.194 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( 
i++ )) 00:20:28.194 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:28.194 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:20:28.194 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:20:28.194 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:28.194 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:28.194 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:28.194 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:28.194 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:28.452 malloc2 00:20:28.453 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:28.712 [2024-07-25 13:20:38.946333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:28.712 [2024-07-25 13:20:38.946373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.712 [2024-07-25 13:20:38.946389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b3bf70 00:20:28.712 [2024-07-25 13:20:38.946401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.712 [2024-07-25 13:20:38.947712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.712 [2024-07-25 13:20:38.947738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:20:28.712 pt2 00:20:28.712 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:28.712 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:28.712 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:20:28.712 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:20:28.712 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:28.712 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:28.712 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:28.712 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:28.712 13:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:28.712 malloc3 00:20:28.712 13:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:28.971 [2024-07-25 13:20:39.403516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:28.971 [2024-07-25 13:20:39.403552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.971 [2024-07-25 13:20:39.403569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b3f830 00:20:28.971 [2024-07-25 13:20:39.403581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.971 [2024-07-25 13:20:39.404842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:20:28.971 [2024-07-25 13:20:39.404867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:28.971 pt3 00:20:28.971 13:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:28.971 13:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:28.971 13:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:20:28.971 13:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:20:28.971 13:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:28.971 13:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:28.971 13:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:28.971 13:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:28.971 13:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:29.229 malloc4 00:20:29.229 13:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:29.488 [2024-07-25 13:20:39.825011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:29.488 [2024-07-25 13:20:39.825054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.488 [2024-07-25 13:20:39.825072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b40f10 00:20:29.488 [2024-07-25 13:20:39.825083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:20:29.488 [2024-07-25 13:20:39.826354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.488 [2024-07-25 13:20:39.826382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:29.488 pt4 00:20:29.488 13:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:29.488 13:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:29.488 13:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:29.747 [2024-07-25 13:20:40.053633] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:29.747 [2024-07-25 13:20:40.054756] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:29.747 [2024-07-25 13:20:40.054806] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:29.747 [2024-07-25 13:20:40.054846] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:29.747 [2024-07-25 13:20:40.054991] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b42190 00:20:29.747 [2024-07-25 13:20:40.055001] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:29.747 [2024-07-25 13:20:40.055187] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b40c30 00:20:29.747 [2024-07-25 13:20:40.055313] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b42190 00:20:29.747 [2024-07-25 13:20:40.055323] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b42190 00:20:29.747 [2024-07-25 13:20:40.055419] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.747 13:20:40 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:29.747 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:29.747 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:29.747 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:29.747 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:29.747 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:29.747 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:29.747 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:29.747 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:29.747 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:29.747 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.747 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.005 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.005 "name": "raid_bdev1", 00:20:30.005 "uuid": "3cb48dc7-86b5-4e09-89d3-e184d3a6e3c3", 00:20:30.005 "strip_size_kb": 64, 00:20:30.005 "state": "online", 00:20:30.005 "raid_level": "concat", 00:20:30.005 "superblock": true, 00:20:30.005 "num_base_bdevs": 4, 00:20:30.005 "num_base_bdevs_discovered": 4, 00:20:30.005 "num_base_bdevs_operational": 4, 00:20:30.005 "base_bdevs_list": [ 00:20:30.005 { 00:20:30.005 "name": "pt1", 00:20:30.005 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:30.005 
"is_configured": true, 00:20:30.005 "data_offset": 2048, 00:20:30.005 "data_size": 63488 00:20:30.005 }, 00:20:30.005 { 00:20:30.005 "name": "pt2", 00:20:30.005 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:30.005 "is_configured": true, 00:20:30.005 "data_offset": 2048, 00:20:30.005 "data_size": 63488 00:20:30.006 }, 00:20:30.006 { 00:20:30.006 "name": "pt3", 00:20:30.006 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:30.006 "is_configured": true, 00:20:30.006 "data_offset": 2048, 00:20:30.006 "data_size": 63488 00:20:30.006 }, 00:20:30.006 { 00:20:30.006 "name": "pt4", 00:20:30.006 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:30.006 "is_configured": true, 00:20:30.006 "data_offset": 2048, 00:20:30.006 "data_size": 63488 00:20:30.006 } 00:20:30.006 ] 00:20:30.006 }' 00:20:30.006 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.006 13:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.573 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:20:30.573 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:30.573 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:30.573 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:30.573 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:30.573 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:30.573 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:30.573 13:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:30.832 [2024-07-25 13:20:41.072571] 
bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:30.832 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:30.832 "name": "raid_bdev1", 00:20:30.832 "aliases": [ 00:20:30.832 "3cb48dc7-86b5-4e09-89d3-e184d3a6e3c3" 00:20:30.832 ], 00:20:30.832 "product_name": "Raid Volume", 00:20:30.832 "block_size": 512, 00:20:30.832 "num_blocks": 253952, 00:20:30.832 "uuid": "3cb48dc7-86b5-4e09-89d3-e184d3a6e3c3", 00:20:30.832 "assigned_rate_limits": { 00:20:30.832 "rw_ios_per_sec": 0, 00:20:30.832 "rw_mbytes_per_sec": 0, 00:20:30.832 "r_mbytes_per_sec": 0, 00:20:30.832 "w_mbytes_per_sec": 0 00:20:30.832 }, 00:20:30.832 "claimed": false, 00:20:30.832 "zoned": false, 00:20:30.832 "supported_io_types": { 00:20:30.832 "read": true, 00:20:30.832 "write": true, 00:20:30.832 "unmap": true, 00:20:30.832 "flush": true, 00:20:30.832 "reset": true, 00:20:30.832 "nvme_admin": false, 00:20:30.832 "nvme_io": false, 00:20:30.832 "nvme_io_md": false, 00:20:30.832 "write_zeroes": true, 00:20:30.832 "zcopy": false, 00:20:30.832 "get_zone_info": false, 00:20:30.832 "zone_management": false, 00:20:30.832 "zone_append": false, 00:20:30.832 "compare": false, 00:20:30.832 "compare_and_write": false, 00:20:30.832 "abort": false, 00:20:30.832 "seek_hole": false, 00:20:30.832 "seek_data": false, 00:20:30.832 "copy": false, 00:20:30.832 "nvme_iov_md": false 00:20:30.832 }, 00:20:30.832 "memory_domains": [ 00:20:30.832 { 00:20:30.832 "dma_device_id": "system", 00:20:30.832 "dma_device_type": 1 00:20:30.832 }, 00:20:30.832 { 00:20:30.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.832 "dma_device_type": 2 00:20:30.832 }, 00:20:30.832 { 00:20:30.832 "dma_device_id": "system", 00:20:30.832 "dma_device_type": 1 00:20:30.832 }, 00:20:30.832 { 00:20:30.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.832 "dma_device_type": 2 00:20:30.832 }, 00:20:30.832 { 00:20:30.832 "dma_device_id": "system", 00:20:30.832 
"dma_device_type": 1 00:20:30.832 }, 00:20:30.832 { 00:20:30.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.832 "dma_device_type": 2 00:20:30.832 }, 00:20:30.832 { 00:20:30.832 "dma_device_id": "system", 00:20:30.832 "dma_device_type": 1 00:20:30.832 }, 00:20:30.832 { 00:20:30.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.832 "dma_device_type": 2 00:20:30.832 } 00:20:30.832 ], 00:20:30.832 "driver_specific": { 00:20:30.832 "raid": { 00:20:30.832 "uuid": "3cb48dc7-86b5-4e09-89d3-e184d3a6e3c3", 00:20:30.832 "strip_size_kb": 64, 00:20:30.832 "state": "online", 00:20:30.832 "raid_level": "concat", 00:20:30.832 "superblock": true, 00:20:30.832 "num_base_bdevs": 4, 00:20:30.832 "num_base_bdevs_discovered": 4, 00:20:30.832 "num_base_bdevs_operational": 4, 00:20:30.832 "base_bdevs_list": [ 00:20:30.832 { 00:20:30.832 "name": "pt1", 00:20:30.832 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:30.832 "is_configured": true, 00:20:30.832 "data_offset": 2048, 00:20:30.832 "data_size": 63488 00:20:30.832 }, 00:20:30.832 { 00:20:30.832 "name": "pt2", 00:20:30.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:30.832 "is_configured": true, 00:20:30.832 "data_offset": 2048, 00:20:30.832 "data_size": 63488 00:20:30.832 }, 00:20:30.832 { 00:20:30.832 "name": "pt3", 00:20:30.832 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:30.832 "is_configured": true, 00:20:30.832 "data_offset": 2048, 00:20:30.832 "data_size": 63488 00:20:30.832 }, 00:20:30.832 { 00:20:30.832 "name": "pt4", 00:20:30.832 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:30.832 "is_configured": true, 00:20:30.832 "data_offset": 2048, 00:20:30.832 "data_size": 63488 00:20:30.832 } 00:20:30.832 ] 00:20:30.832 } 00:20:30.832 } 00:20:30.832 }' 00:20:30.832 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:30.832 13:20:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:30.832 pt2 00:20:30.832 pt3 00:20:30.832 pt4' 00:20:30.832 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:30.832 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:30.832 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:31.092 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:31.092 "name": "pt1", 00:20:31.092 "aliases": [ 00:20:31.092 "00000000-0000-0000-0000-000000000001" 00:20:31.092 ], 00:20:31.092 "product_name": "passthru", 00:20:31.092 "block_size": 512, 00:20:31.092 "num_blocks": 65536, 00:20:31.092 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:31.092 "assigned_rate_limits": { 00:20:31.092 "rw_ios_per_sec": 0, 00:20:31.092 "rw_mbytes_per_sec": 0, 00:20:31.092 "r_mbytes_per_sec": 0, 00:20:31.092 "w_mbytes_per_sec": 0 00:20:31.092 }, 00:20:31.092 "claimed": true, 00:20:31.092 "claim_type": "exclusive_write", 00:20:31.092 "zoned": false, 00:20:31.092 "supported_io_types": { 00:20:31.092 "read": true, 00:20:31.092 "write": true, 00:20:31.092 "unmap": true, 00:20:31.092 "flush": true, 00:20:31.092 "reset": true, 00:20:31.092 "nvme_admin": false, 00:20:31.092 "nvme_io": false, 00:20:31.092 "nvme_io_md": false, 00:20:31.092 "write_zeroes": true, 00:20:31.092 "zcopy": true, 00:20:31.092 "get_zone_info": false, 00:20:31.092 "zone_management": false, 00:20:31.092 "zone_append": false, 00:20:31.092 "compare": false, 00:20:31.092 "compare_and_write": false, 00:20:31.092 "abort": true, 00:20:31.092 "seek_hole": false, 00:20:31.092 "seek_data": false, 00:20:31.092 "copy": true, 00:20:31.092 "nvme_iov_md": false 00:20:31.092 }, 00:20:31.092 "memory_domains": [ 00:20:31.092 { 00:20:31.092 "dma_device_id": "system", 00:20:31.092 
"dma_device_type": 1 00:20:31.092 }, 00:20:31.092 { 00:20:31.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.092 "dma_device_type": 2 00:20:31.092 } 00:20:31.092 ], 00:20:31.092 "driver_specific": { 00:20:31.092 "passthru": { 00:20:31.092 "name": "pt1", 00:20:31.092 "base_bdev_name": "malloc1" 00:20:31.092 } 00:20:31.092 } 00:20:31.092 }' 00:20:31.092 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.092 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.092 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:31.092 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.092 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.092 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:31.092 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.092 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.351 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:31.351 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.351 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.351 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:31.351 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:31.351 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:31.351 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:31.611 13:20:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:31.611 "name": "pt2", 00:20:31.611 "aliases": [ 00:20:31.611 "00000000-0000-0000-0000-000000000002" 00:20:31.611 ], 00:20:31.611 "product_name": "passthru", 00:20:31.611 "block_size": 512, 00:20:31.611 "num_blocks": 65536, 00:20:31.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:31.611 "assigned_rate_limits": { 00:20:31.611 "rw_ios_per_sec": 0, 00:20:31.611 "rw_mbytes_per_sec": 0, 00:20:31.611 "r_mbytes_per_sec": 0, 00:20:31.611 "w_mbytes_per_sec": 0 00:20:31.611 }, 00:20:31.611 "claimed": true, 00:20:31.611 "claim_type": "exclusive_write", 00:20:31.611 "zoned": false, 00:20:31.611 "supported_io_types": { 00:20:31.611 "read": true, 00:20:31.611 "write": true, 00:20:31.611 "unmap": true, 00:20:31.611 "flush": true, 00:20:31.611 "reset": true, 00:20:31.611 "nvme_admin": false, 00:20:31.611 "nvme_io": false, 00:20:31.611 "nvme_io_md": false, 00:20:31.611 "write_zeroes": true, 00:20:31.611 "zcopy": true, 00:20:31.611 "get_zone_info": false, 00:20:31.611 "zone_management": false, 00:20:31.611 "zone_append": false, 00:20:31.611 "compare": false, 00:20:31.611 "compare_and_write": false, 00:20:31.611 "abort": true, 00:20:31.611 "seek_hole": false, 00:20:31.611 "seek_data": false, 00:20:31.611 "copy": true, 00:20:31.611 "nvme_iov_md": false 00:20:31.611 }, 00:20:31.611 "memory_domains": [ 00:20:31.611 { 00:20:31.611 "dma_device_id": "system", 00:20:31.611 "dma_device_type": 1 00:20:31.611 }, 00:20:31.611 { 00:20:31.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.611 "dma_device_type": 2 00:20:31.611 } 00:20:31.611 ], 00:20:31.611 "driver_specific": { 00:20:31.611 "passthru": { 00:20:31.611 "name": "pt2", 00:20:31.611 "base_bdev_name": "malloc2" 00:20:31.611 } 00:20:31.611 } 00:20:31.611 }' 00:20:31.611 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.611 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.611 13:20:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:31.611 13:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.611 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.611 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:31.611 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.870 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.870 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:31.870 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.870 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.870 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:31.870 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:31.870 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:31.870 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:32.128 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:32.128 "name": "pt3", 00:20:32.128 "aliases": [ 00:20:32.128 "00000000-0000-0000-0000-000000000003" 00:20:32.128 ], 00:20:32.128 "product_name": "passthru", 00:20:32.128 "block_size": 512, 00:20:32.128 "num_blocks": 65536, 00:20:32.128 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:32.128 "assigned_rate_limits": { 00:20:32.128 "rw_ios_per_sec": 0, 00:20:32.128 "rw_mbytes_per_sec": 0, 00:20:32.128 "r_mbytes_per_sec": 0, 00:20:32.128 "w_mbytes_per_sec": 0 00:20:32.128 }, 00:20:32.128 "claimed": true, 00:20:32.128 
"claim_type": "exclusive_write", 00:20:32.128 "zoned": false, 00:20:32.128 "supported_io_types": { 00:20:32.128 "read": true, 00:20:32.128 "write": true, 00:20:32.128 "unmap": true, 00:20:32.128 "flush": true, 00:20:32.128 "reset": true, 00:20:32.128 "nvme_admin": false, 00:20:32.128 "nvme_io": false, 00:20:32.128 "nvme_io_md": false, 00:20:32.128 "write_zeroes": true, 00:20:32.128 "zcopy": true, 00:20:32.128 "get_zone_info": false, 00:20:32.128 "zone_management": false, 00:20:32.128 "zone_append": false, 00:20:32.128 "compare": false, 00:20:32.128 "compare_and_write": false, 00:20:32.128 "abort": true, 00:20:32.128 "seek_hole": false, 00:20:32.128 "seek_data": false, 00:20:32.128 "copy": true, 00:20:32.128 "nvme_iov_md": false 00:20:32.128 }, 00:20:32.128 "memory_domains": [ 00:20:32.128 { 00:20:32.128 "dma_device_id": "system", 00:20:32.128 "dma_device_type": 1 00:20:32.128 }, 00:20:32.128 { 00:20:32.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.128 "dma_device_type": 2 00:20:32.128 } 00:20:32.128 ], 00:20:32.128 "driver_specific": { 00:20:32.128 "passthru": { 00:20:32.128 "name": "pt3", 00:20:32.128 "base_bdev_name": "malloc3" 00:20:32.128 } 00:20:32.128 } 00:20:32.128 }' 00:20:32.128 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.128 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.128 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:32.128 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.128 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.387 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:32.387 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.387 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:20:32.387 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:32.387 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.387 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.387 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:32.387 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:32.387 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:32.387 13:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:32.645 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:32.645 "name": "pt4", 00:20:32.645 "aliases": [ 00:20:32.645 "00000000-0000-0000-0000-000000000004" 00:20:32.645 ], 00:20:32.645 "product_name": "passthru", 00:20:32.645 "block_size": 512, 00:20:32.645 "num_blocks": 65536, 00:20:32.645 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:32.645 "assigned_rate_limits": { 00:20:32.645 "rw_ios_per_sec": 0, 00:20:32.645 "rw_mbytes_per_sec": 0, 00:20:32.645 "r_mbytes_per_sec": 0, 00:20:32.645 "w_mbytes_per_sec": 0 00:20:32.645 }, 00:20:32.645 "claimed": true, 00:20:32.645 "claim_type": "exclusive_write", 00:20:32.645 "zoned": false, 00:20:32.645 "supported_io_types": { 00:20:32.645 "read": true, 00:20:32.645 "write": true, 00:20:32.645 "unmap": true, 00:20:32.645 "flush": true, 00:20:32.645 "reset": true, 00:20:32.645 "nvme_admin": false, 00:20:32.645 "nvme_io": false, 00:20:32.645 "nvme_io_md": false, 00:20:32.645 "write_zeroes": true, 00:20:32.645 "zcopy": true, 00:20:32.645 "get_zone_info": false, 00:20:32.645 "zone_management": false, 00:20:32.645 "zone_append": false, 00:20:32.645 "compare": false, 00:20:32.645 
"compare_and_write": false, 00:20:32.645 "abort": true, 00:20:32.645 "seek_hole": false, 00:20:32.645 "seek_data": false, 00:20:32.645 "copy": true, 00:20:32.645 "nvme_iov_md": false 00:20:32.645 }, 00:20:32.645 "memory_domains": [ 00:20:32.645 { 00:20:32.645 "dma_device_id": "system", 00:20:32.645 "dma_device_type": 1 00:20:32.645 }, 00:20:32.645 { 00:20:32.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.645 "dma_device_type": 2 00:20:32.645 } 00:20:32.645 ], 00:20:32.645 "driver_specific": { 00:20:32.645 "passthru": { 00:20:32.645 "name": "pt4", 00:20:32.645 "base_bdev_name": "malloc4" 00:20:32.645 } 00:20:32.645 } 00:20:32.645 }' 00:20:32.645 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.645 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:32.645 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:32.645 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.903 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:32.904 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:32.904 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.904 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:32.904 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:32.904 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.904 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:32.904 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:32.904 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:32.904 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:20:33.161 [2024-07-25 13:20:43.575167] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:33.161 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=3cb48dc7-86b5-4e09-89d3-e184d3a6e3c3 00:20:33.161 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 3cb48dc7-86b5-4e09-89d3-e184d3a6e3c3 ']' 00:20:33.161 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:33.420 [2024-07-25 13:20:43.803490] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:33.420 [2024-07-25 13:20:43.803510] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:33.420 [2024-07-25 13:20:43.803558] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.420 [2024-07-25 13:20:43.803612] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:33.420 [2024-07-25 13:20:43.803623] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b42190 name raid_bdev1, state offline 00:20:33.420 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.420 13:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:20:33.679 13:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:20:33.679 13:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:20:33.679 13:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in 
"${base_bdevs_pt[@]}" 00:20:33.679 13:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:33.938 13:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:33.938 13:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:34.197 13:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:34.197 13:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:34.483 13:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:34.483 13:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:34.791 13:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:34.791 13:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:34.791 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:20:34.791 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:34.791 13:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:20:34.791 13:20:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:34.791 13:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:34.791 13:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:34.791 13:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:34.791 13:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:34.791 13:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:34.791 13:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:34.791 13:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:34.791 13:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:20:34.791 13:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:35.051 [2024-07-25 13:20:45.407650] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:35.051 [2024-07-25 13:20:45.408918] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:35.051 [2024-07-25 13:20:45.408960] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:35.051 [2024-07-25 13:20:45.408991] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:35.051 [2024-07-25 13:20:45.409031] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:35.051 [2024-07-25 13:20:45.409068] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:35.051 [2024-07-25 13:20:45.409089] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:35.051 [2024-07-25 13:20:45.409110] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:20:35.051 [2024-07-25 13:20:45.409125] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:35.051 [2024-07-25 13:20:45.409134] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b408d0 name raid_bdev1, state configuring 00:20:35.051 request: 00:20:35.051 { 00:20:35.051 "name": "raid_bdev1", 00:20:35.051 "raid_level": "concat", 00:20:35.051 "base_bdevs": [ 00:20:35.051 "malloc1", 00:20:35.051 "malloc2", 00:20:35.051 "malloc3", 00:20:35.051 "malloc4" 00:20:35.051 ], 00:20:35.051 "strip_size_kb": 64, 00:20:35.051 "superblock": false, 00:20:35.051 "method": "bdev_raid_create", 00:20:35.051 "req_id": 1 00:20:35.051 } 00:20:35.051 Got JSON-RPC error response 00:20:35.051 response: 00:20:35.051 { 00:20:35.051 "code": -17, 00:20:35.051 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:35.051 } 00:20:35.051 13:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:20:35.051 13:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:35.051 13:20:45 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:35.051 13:20:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:35.051 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.051 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:20:35.311 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:20:35.311 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:20:35.311 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:35.570 [2024-07-25 13:20:45.864806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:35.570 [2024-07-25 13:20:45.864855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.570 [2024-07-25 13:20:45.864874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b43380 00:20:35.570 [2024-07-25 13:20:45.864885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.570 [2024-07-25 13:20:45.866402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.570 [2024-07-25 13:20:45.866431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:35.570 [2024-07-25 13:20:45.866495] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:35.570 [2024-07-25 13:20:45.866520] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:35.570 pt1 00:20:35.570 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 
configuring concat 64 4 00:20:35.570 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:35.570 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:35.570 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:35.570 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:35.570 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:35.570 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:35.570 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:35.570 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:35.570 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:35.570 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.570 13:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.830 13:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:35.830 "name": "raid_bdev1", 00:20:35.830 "uuid": "3cb48dc7-86b5-4e09-89d3-e184d3a6e3c3", 00:20:35.830 "strip_size_kb": 64, 00:20:35.830 "state": "configuring", 00:20:35.830 "raid_level": "concat", 00:20:35.830 "superblock": true, 00:20:35.830 "num_base_bdevs": 4, 00:20:35.830 "num_base_bdevs_discovered": 1, 00:20:35.830 "num_base_bdevs_operational": 4, 00:20:35.830 "base_bdevs_list": [ 00:20:35.830 { 00:20:35.830 "name": "pt1", 00:20:35.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:35.830 "is_configured": true, 00:20:35.830 "data_offset": 2048, 
00:20:35.830 "data_size": 63488 00:20:35.830 }, 00:20:35.830 { 00:20:35.830 "name": null, 00:20:35.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:35.830 "is_configured": false, 00:20:35.830 "data_offset": 2048, 00:20:35.830 "data_size": 63488 00:20:35.830 }, 00:20:35.830 { 00:20:35.830 "name": null, 00:20:35.830 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:35.830 "is_configured": false, 00:20:35.830 "data_offset": 2048, 00:20:35.830 "data_size": 63488 00:20:35.830 }, 00:20:35.830 { 00:20:35.830 "name": null, 00:20:35.830 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:35.830 "is_configured": false, 00:20:35.830 "data_offset": 2048, 00:20:35.830 "data_size": 63488 00:20:35.830 } 00:20:35.830 ] 00:20:35.830 }' 00:20:35.830 13:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:35.830 13:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.399 13:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:20:36.399 13:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:36.399 [2024-07-25 13:20:46.875487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:36.399 [2024-07-25 13:20:46.875539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.399 [2024-07-25 13:20:46.875559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b408d0 00:20:36.399 [2024-07-25 13:20:46.875570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.399 [2024-07-25 13:20:46.875903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.399 [2024-07-25 13:20:46.875922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: pt2 00:20:36.399 [2024-07-25 13:20:46.875980] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:36.399 [2024-07-25 13:20:46.875998] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:36.399 pt2 00:20:36.658 13:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:36.658 [2024-07-25 13:20:47.100084] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:36.658 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:20:36.658 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:36.658 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:36.658 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:36.658 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:36.658 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:36.658 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:36.658 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:36.658 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:36.658 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:36.658 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.658 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:36.918 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:36.918 "name": "raid_bdev1", 00:20:36.918 "uuid": "3cb48dc7-86b5-4e09-89d3-e184d3a6e3c3", 00:20:36.918 "strip_size_kb": 64, 00:20:36.918 "state": "configuring", 00:20:36.918 "raid_level": "concat", 00:20:36.918 "superblock": true, 00:20:36.918 "num_base_bdevs": 4, 00:20:36.918 "num_base_bdevs_discovered": 1, 00:20:36.918 "num_base_bdevs_operational": 4, 00:20:36.918 "base_bdevs_list": [ 00:20:36.918 { 00:20:36.918 "name": "pt1", 00:20:36.918 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:36.918 "is_configured": true, 00:20:36.918 "data_offset": 2048, 00:20:36.918 "data_size": 63488 00:20:36.918 }, 00:20:36.918 { 00:20:36.918 "name": null, 00:20:36.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:36.918 "is_configured": false, 00:20:36.918 "data_offset": 2048, 00:20:36.918 "data_size": 63488 00:20:36.918 }, 00:20:36.918 { 00:20:36.918 "name": null, 00:20:36.918 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:36.918 "is_configured": false, 00:20:36.918 "data_offset": 2048, 00:20:36.918 "data_size": 63488 00:20:36.918 }, 00:20:36.918 { 00:20:36.918 "name": null, 00:20:36.918 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:36.918 "is_configured": false, 00:20:36.918 "data_offset": 2048, 00:20:36.918 "data_size": 63488 00:20:36.918 } 00:20:36.918 ] 00:20:36.918 }' 00:20:36.918 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:36.918 13:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.486 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:20:37.486 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:37.487 13:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:37.746 [2024-07-25 13:20:48.142840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:37.746 [2024-07-25 13:20:48.142892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.746 [2024-07-25 13:20:48.142910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b3dea0 00:20:37.746 [2024-07-25 13:20:48.142922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.746 [2024-07-25 13:20:48.143255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.746 [2024-07-25 13:20:48.143280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:37.746 [2024-07-25 13:20:48.143342] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:37.746 [2024-07-25 13:20:48.143360] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:37.746 pt2 00:20:37.746 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:20:37.746 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:37.746 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:38.005 [2024-07-25 13:20:48.371444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:38.005 [2024-07-25 13:20:48.371480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.005 [2024-07-25 13:20:48.371498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19a4520 00:20:38.005 [2024-07-25 13:20:48.371509] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:20:38.005 [2024-07-25 13:20:48.371778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.005 [2024-07-25 13:20:48.371794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:38.005 [2024-07-25 13:20:48.371844] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:38.005 [2024-07-25 13:20:48.371860] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:38.005 pt3 00:20:38.005 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:20:38.005 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:38.005 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:38.265 [2024-07-25 13:20:48.592016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:38.265 [2024-07-25 13:20:48.592050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.265 [2024-07-25 13:20:48.592065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b42190 00:20:38.265 [2024-07-25 13:20:48.592077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.265 [2024-07-25 13:20:48.592342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.265 [2024-07-25 13:20:48.592359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:38.265 [2024-07-25 13:20:48.592406] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:38.265 [2024-07-25 13:20:48.592423] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:38.265 [2024-07-25 13:20:48.592531] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b42ef0 00:20:38.265 [2024-07-25 13:20:48.592541] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:38.265 [2024-07-25 13:20:48.592690] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b48290 00:20:38.265 [2024-07-25 13:20:48.592806] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b42ef0 00:20:38.265 [2024-07-25 13:20:48.592815] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b42ef0 00:20:38.265 [2024-07-25 13:20:48.592899] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.265 pt4 00:20:38.265 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:20:38.265 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:38.265 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:38.265 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:38.265 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:38.265 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:38.265 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:38.265 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:38.265 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:38.265 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:38.265 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:38.265 13:20:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@124 -- # local tmp 00:20:38.265 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.265 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.525 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.525 "name": "raid_bdev1", 00:20:38.525 "uuid": "3cb48dc7-86b5-4e09-89d3-e184d3a6e3c3", 00:20:38.525 "strip_size_kb": 64, 00:20:38.525 "state": "online", 00:20:38.525 "raid_level": "concat", 00:20:38.525 "superblock": true, 00:20:38.525 "num_base_bdevs": 4, 00:20:38.525 "num_base_bdevs_discovered": 4, 00:20:38.525 "num_base_bdevs_operational": 4, 00:20:38.525 "base_bdevs_list": [ 00:20:38.525 { 00:20:38.525 "name": "pt1", 00:20:38.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:38.525 "is_configured": true, 00:20:38.525 "data_offset": 2048, 00:20:38.525 "data_size": 63488 00:20:38.525 }, 00:20:38.525 { 00:20:38.525 "name": "pt2", 00:20:38.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:38.525 "is_configured": true, 00:20:38.525 "data_offset": 2048, 00:20:38.525 "data_size": 63488 00:20:38.525 }, 00:20:38.525 { 00:20:38.525 "name": "pt3", 00:20:38.525 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:38.525 "is_configured": true, 00:20:38.525 "data_offset": 2048, 00:20:38.525 "data_size": 63488 00:20:38.525 }, 00:20:38.525 { 00:20:38.525 "name": "pt4", 00:20:38.525 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:38.525 "is_configured": true, 00:20:38.525 "data_offset": 2048, 00:20:38.525 "data_size": 63488 00:20:38.525 } 00:20:38.525 ] 00:20:38.525 }' 00:20:38.525 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.525 13:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.093 13:20:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:20:39.093 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:39.093 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:39.094 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:39.094 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:39.094 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:39.094 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:39.094 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:39.353 [2024-07-25 13:20:49.619013] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:39.353 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:39.353 "name": "raid_bdev1", 00:20:39.353 "aliases": [ 00:20:39.353 "3cb48dc7-86b5-4e09-89d3-e184d3a6e3c3" 00:20:39.353 ], 00:20:39.353 "product_name": "Raid Volume", 00:20:39.353 "block_size": 512, 00:20:39.353 "num_blocks": 253952, 00:20:39.353 "uuid": "3cb48dc7-86b5-4e09-89d3-e184d3a6e3c3", 00:20:39.353 "assigned_rate_limits": { 00:20:39.353 "rw_ios_per_sec": 0, 00:20:39.353 "rw_mbytes_per_sec": 0, 00:20:39.353 "r_mbytes_per_sec": 0, 00:20:39.353 "w_mbytes_per_sec": 0 00:20:39.353 }, 00:20:39.353 "claimed": false, 00:20:39.353 "zoned": false, 00:20:39.353 "supported_io_types": { 00:20:39.353 "read": true, 00:20:39.353 "write": true, 00:20:39.353 "unmap": true, 00:20:39.353 "flush": true, 00:20:39.353 "reset": true, 00:20:39.353 "nvme_admin": false, 00:20:39.353 "nvme_io": false, 00:20:39.353 "nvme_io_md": false, 00:20:39.353 "write_zeroes": 
true, 00:20:39.353 "zcopy": false, 00:20:39.353 "get_zone_info": false, 00:20:39.353 "zone_management": false, 00:20:39.353 "zone_append": false, 00:20:39.353 "compare": false, 00:20:39.353 "compare_and_write": false, 00:20:39.353 "abort": false, 00:20:39.353 "seek_hole": false, 00:20:39.353 "seek_data": false, 00:20:39.353 "copy": false, 00:20:39.353 "nvme_iov_md": false 00:20:39.353 }, 00:20:39.353 "memory_domains": [ 00:20:39.353 { 00:20:39.353 "dma_device_id": "system", 00:20:39.353 "dma_device_type": 1 00:20:39.353 }, 00:20:39.353 { 00:20:39.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.353 "dma_device_type": 2 00:20:39.353 }, 00:20:39.353 { 00:20:39.353 "dma_device_id": "system", 00:20:39.353 "dma_device_type": 1 00:20:39.353 }, 00:20:39.353 { 00:20:39.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.353 "dma_device_type": 2 00:20:39.353 }, 00:20:39.353 { 00:20:39.353 "dma_device_id": "system", 00:20:39.353 "dma_device_type": 1 00:20:39.353 }, 00:20:39.353 { 00:20:39.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.353 "dma_device_type": 2 00:20:39.353 }, 00:20:39.353 { 00:20:39.353 "dma_device_id": "system", 00:20:39.353 "dma_device_type": 1 00:20:39.353 }, 00:20:39.353 { 00:20:39.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.353 "dma_device_type": 2 00:20:39.353 } 00:20:39.353 ], 00:20:39.353 "driver_specific": { 00:20:39.353 "raid": { 00:20:39.353 "uuid": "3cb48dc7-86b5-4e09-89d3-e184d3a6e3c3", 00:20:39.353 "strip_size_kb": 64, 00:20:39.353 "state": "online", 00:20:39.353 "raid_level": "concat", 00:20:39.353 "superblock": true, 00:20:39.353 "num_base_bdevs": 4, 00:20:39.353 "num_base_bdevs_discovered": 4, 00:20:39.353 "num_base_bdevs_operational": 4, 00:20:39.353 "base_bdevs_list": [ 00:20:39.353 { 00:20:39.353 "name": "pt1", 00:20:39.353 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:39.353 "is_configured": true, 00:20:39.353 "data_offset": 2048, 00:20:39.353 "data_size": 63488 00:20:39.353 }, 00:20:39.353 { 
00:20:39.353 "name": "pt2", 00:20:39.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:39.353 "is_configured": true, 00:20:39.353 "data_offset": 2048, 00:20:39.353 "data_size": 63488 00:20:39.353 }, 00:20:39.353 { 00:20:39.353 "name": "pt3", 00:20:39.353 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:39.353 "is_configured": true, 00:20:39.353 "data_offset": 2048, 00:20:39.353 "data_size": 63488 00:20:39.353 }, 00:20:39.353 { 00:20:39.353 "name": "pt4", 00:20:39.353 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:39.353 "is_configured": true, 00:20:39.353 "data_offset": 2048, 00:20:39.353 "data_size": 63488 00:20:39.353 } 00:20:39.353 ] 00:20:39.353 } 00:20:39.353 } 00:20:39.353 }' 00:20:39.353 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:39.353 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:39.353 pt2 00:20:39.353 pt3 00:20:39.353 pt4' 00:20:39.353 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:39.353 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:39.353 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:39.612 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:39.612 "name": "pt1", 00:20:39.612 "aliases": [ 00:20:39.612 "00000000-0000-0000-0000-000000000001" 00:20:39.612 ], 00:20:39.612 "product_name": "passthru", 00:20:39.612 "block_size": 512, 00:20:39.612 "num_blocks": 65536, 00:20:39.612 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:39.612 "assigned_rate_limits": { 00:20:39.612 "rw_ios_per_sec": 0, 00:20:39.612 "rw_mbytes_per_sec": 0, 00:20:39.612 "r_mbytes_per_sec": 0, 00:20:39.612 
"w_mbytes_per_sec": 0 00:20:39.612 }, 00:20:39.612 "claimed": true, 00:20:39.612 "claim_type": "exclusive_write", 00:20:39.612 "zoned": false, 00:20:39.612 "supported_io_types": { 00:20:39.612 "read": true, 00:20:39.612 "write": true, 00:20:39.612 "unmap": true, 00:20:39.612 "flush": true, 00:20:39.612 "reset": true, 00:20:39.612 "nvme_admin": false, 00:20:39.612 "nvme_io": false, 00:20:39.612 "nvme_io_md": false, 00:20:39.612 "write_zeroes": true, 00:20:39.612 "zcopy": true, 00:20:39.612 "get_zone_info": false, 00:20:39.612 "zone_management": false, 00:20:39.612 "zone_append": false, 00:20:39.612 "compare": false, 00:20:39.612 "compare_and_write": false, 00:20:39.612 "abort": true, 00:20:39.612 "seek_hole": false, 00:20:39.612 "seek_data": false, 00:20:39.612 "copy": true, 00:20:39.612 "nvme_iov_md": false 00:20:39.612 }, 00:20:39.612 "memory_domains": [ 00:20:39.612 { 00:20:39.612 "dma_device_id": "system", 00:20:39.612 "dma_device_type": 1 00:20:39.612 }, 00:20:39.612 { 00:20:39.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.612 "dma_device_type": 2 00:20:39.612 } 00:20:39.612 ], 00:20:39.612 "driver_specific": { 00:20:39.612 "passthru": { 00:20:39.612 "name": "pt1", 00:20:39.612 "base_bdev_name": "malloc1" 00:20:39.612 } 00:20:39.612 } 00:20:39.612 }' 00:20:39.612 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.612 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.612 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:39.612 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.612 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.612 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:39.612 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.871 13:20:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.871 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:39.871 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.871 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.871 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:39.871 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:39.871 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:39.871 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:40.131 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:40.131 "name": "pt2", 00:20:40.131 "aliases": [ 00:20:40.131 "00000000-0000-0000-0000-000000000002" 00:20:40.131 ], 00:20:40.131 "product_name": "passthru", 00:20:40.131 "block_size": 512, 00:20:40.131 "num_blocks": 65536, 00:20:40.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:40.131 "assigned_rate_limits": { 00:20:40.131 "rw_ios_per_sec": 0, 00:20:40.131 "rw_mbytes_per_sec": 0, 00:20:40.131 "r_mbytes_per_sec": 0, 00:20:40.131 "w_mbytes_per_sec": 0 00:20:40.131 }, 00:20:40.131 "claimed": true, 00:20:40.131 "claim_type": "exclusive_write", 00:20:40.131 "zoned": false, 00:20:40.131 "supported_io_types": { 00:20:40.131 "read": true, 00:20:40.131 "write": true, 00:20:40.131 "unmap": true, 00:20:40.131 "flush": true, 00:20:40.131 "reset": true, 00:20:40.131 "nvme_admin": false, 00:20:40.131 "nvme_io": false, 00:20:40.131 "nvme_io_md": false, 00:20:40.131 "write_zeroes": true, 00:20:40.131 "zcopy": true, 00:20:40.131 "get_zone_info": false, 00:20:40.131 "zone_management": false, 00:20:40.131 
"zone_append": false, 00:20:40.131 "compare": false, 00:20:40.131 "compare_and_write": false, 00:20:40.131 "abort": true, 00:20:40.131 "seek_hole": false, 00:20:40.131 "seek_data": false, 00:20:40.131 "copy": true, 00:20:40.131 "nvme_iov_md": false 00:20:40.131 }, 00:20:40.131 "memory_domains": [ 00:20:40.131 { 00:20:40.131 "dma_device_id": "system", 00:20:40.131 "dma_device_type": 1 00:20:40.131 }, 00:20:40.131 { 00:20:40.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.131 "dma_device_type": 2 00:20:40.131 } 00:20:40.131 ], 00:20:40.131 "driver_specific": { 00:20:40.131 "passthru": { 00:20:40.131 "name": "pt2", 00:20:40.131 "base_bdev_name": "malloc2" 00:20:40.131 } 00:20:40.131 } 00:20:40.131 }' 00:20:40.131 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.131 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.131 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:40.131 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.390 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.390 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:40.390 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.390 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.390 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:40.390 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.390 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.390 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:40.390 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:20:40.390 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:40.390 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:40.649 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:40.649 "name": "pt3", 00:20:40.649 "aliases": [ 00:20:40.649 "00000000-0000-0000-0000-000000000003" 00:20:40.649 ], 00:20:40.649 "product_name": "passthru", 00:20:40.649 "block_size": 512, 00:20:40.649 "num_blocks": 65536, 00:20:40.649 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:40.649 "assigned_rate_limits": { 00:20:40.649 "rw_ios_per_sec": 0, 00:20:40.649 "rw_mbytes_per_sec": 0, 00:20:40.649 "r_mbytes_per_sec": 0, 00:20:40.649 "w_mbytes_per_sec": 0 00:20:40.649 }, 00:20:40.649 "claimed": true, 00:20:40.649 "claim_type": "exclusive_write", 00:20:40.649 "zoned": false, 00:20:40.649 "supported_io_types": { 00:20:40.649 "read": true, 00:20:40.649 "write": true, 00:20:40.649 "unmap": true, 00:20:40.649 "flush": true, 00:20:40.649 "reset": true, 00:20:40.649 "nvme_admin": false, 00:20:40.649 "nvme_io": false, 00:20:40.649 "nvme_io_md": false, 00:20:40.649 "write_zeroes": true, 00:20:40.649 "zcopy": true, 00:20:40.649 "get_zone_info": false, 00:20:40.649 "zone_management": false, 00:20:40.649 "zone_append": false, 00:20:40.650 "compare": false, 00:20:40.650 "compare_and_write": false, 00:20:40.650 "abort": true, 00:20:40.650 "seek_hole": false, 00:20:40.650 "seek_data": false, 00:20:40.650 "copy": true, 00:20:40.650 "nvme_iov_md": false 00:20:40.650 }, 00:20:40.650 "memory_domains": [ 00:20:40.650 { 00:20:40.650 "dma_device_id": "system", 00:20:40.650 "dma_device_type": 1 00:20:40.650 }, 00:20:40.650 { 00:20:40.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.650 "dma_device_type": 2 00:20:40.650 } 00:20:40.650 ], 00:20:40.650 "driver_specific": { 
00:20:40.650 "passthru": { 00:20:40.650 "name": "pt3", 00:20:40.650 "base_bdev_name": "malloc3" 00:20:40.650 } 00:20:40.650 } 00:20:40.650 }' 00:20:40.650 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.650 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.650 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:40.650 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.909 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.909 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:40.909 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.909 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.909 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:40.909 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.909 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.909 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:40.909 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:40.909 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:40.909 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:41.168 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:41.168 "name": "pt4", 00:20:41.168 "aliases": [ 00:20:41.168 "00000000-0000-0000-0000-000000000004" 00:20:41.168 ], 00:20:41.168 "product_name": "passthru", 
00:20:41.168 "block_size": 512, 00:20:41.168 "num_blocks": 65536, 00:20:41.168 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:41.168 "assigned_rate_limits": { 00:20:41.168 "rw_ios_per_sec": 0, 00:20:41.168 "rw_mbytes_per_sec": 0, 00:20:41.168 "r_mbytes_per_sec": 0, 00:20:41.168 "w_mbytes_per_sec": 0 00:20:41.168 }, 00:20:41.168 "claimed": true, 00:20:41.168 "claim_type": "exclusive_write", 00:20:41.168 "zoned": false, 00:20:41.168 "supported_io_types": { 00:20:41.168 "read": true, 00:20:41.168 "write": true, 00:20:41.168 "unmap": true, 00:20:41.168 "flush": true, 00:20:41.168 "reset": true, 00:20:41.168 "nvme_admin": false, 00:20:41.168 "nvme_io": false, 00:20:41.168 "nvme_io_md": false, 00:20:41.168 "write_zeroes": true, 00:20:41.168 "zcopy": true, 00:20:41.168 "get_zone_info": false, 00:20:41.168 "zone_management": false, 00:20:41.168 "zone_append": false, 00:20:41.168 "compare": false, 00:20:41.168 "compare_and_write": false, 00:20:41.168 "abort": true, 00:20:41.168 "seek_hole": false, 00:20:41.168 "seek_data": false, 00:20:41.168 "copy": true, 00:20:41.168 "nvme_iov_md": false 00:20:41.168 }, 00:20:41.168 "memory_domains": [ 00:20:41.168 { 00:20:41.168 "dma_device_id": "system", 00:20:41.168 "dma_device_type": 1 00:20:41.168 }, 00:20:41.168 { 00:20:41.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.168 "dma_device_type": 2 00:20:41.168 } 00:20:41.168 ], 00:20:41.168 "driver_specific": { 00:20:41.168 "passthru": { 00:20:41.168 "name": "pt4", 00:20:41.168 "base_bdev_name": "malloc4" 00:20:41.168 } 00:20:41.168 } 00:20:41.168 }' 00:20:41.168 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:41.168 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:41.168 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:41.168 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:41.427 13:20:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:41.427 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:41.427 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:41.427 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:41.427 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:41.427 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.427 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.427 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:41.427 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:41.427 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:20:41.686 [2024-07-25 13:20:52.105705] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:41.686 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 3cb48dc7-86b5-4e09-89d3-e184d3a6e3c3 '!=' 3cb48dc7-86b5-4e09-89d3-e184d3a6e3c3 ']' 00:20:41.686 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:20:41.686 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:41.686 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:41.686 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 929380 00:20:41.686 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 929380 ']' 00:20:41.687 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 929380 00:20:41.687 13:20:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:20:41.687 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:41.687 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 929380 00:20:41.946 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:41.946 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:41.946 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 929380' 00:20:41.946 killing process with pid 929380 00:20:41.946 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 929380 00:20:41.946 [2024-07-25 13:20:52.187556] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:41.946 [2024-07-25 13:20:52.187612] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:41.946 [2024-07-25 13:20:52.187670] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:41.946 [2024-07-25 13:20:52.187681] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b42ef0 name raid_bdev1, state offline 00:20:41.946 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 929380 00:20:41.946 [2024-07-25 13:20:52.218573] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:41.946 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:20:41.946 00:20:41.946 real 0m15.268s 00:20:41.946 user 0m27.475s 00:20:41.946 sys 0m2.813s 00:20:41.946 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:41.946 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.946 ************************************ 
00:20:41.946 END TEST raid_superblock_test 00:20:41.946 ************************************ 00:20:42.205 13:20:52 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:20:42.205 13:20:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:42.205 13:20:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:42.205 13:20:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:42.205 ************************************ 00:20:42.205 START TEST raid_read_error_test 00:20:42.205 ************************************ 00:20:42.205 13:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:20:42.205 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:20:42.205 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:20:42.205 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:20:42.205 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:20:42.205 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:42.205 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:20:42.205 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:42.205 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:20:42.206 13:20:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.Z4jD1GESe3 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=932345 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 932345 
/var/tmp/spdk-raid.sock 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 932345 ']' 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:42.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:42.206 13:20:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.206 [2024-07-25 13:20:52.573078] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:20:42.206 [2024-07-25 13:20:52.573146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid932345 ] 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3d:02.3 cannot be used 
00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:42.206 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:42.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:42.206 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:42.466 [2024-07-25 13:20:52.704011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.466 [2024-07-25 13:20:52.790057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.466 [2024-07-25 13:20:52.849939] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:42.466 [2024-07-25 13:20:52.850009] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:43.032 13:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:43.032 13:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:20:43.032 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:43.032 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:43.291 BaseBdev1_malloc 00:20:43.291 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:43.550 true 00:20:43.550 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:43.808 [2024-07-25 13:20:54.115835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:43.808 [2024-07-25 13:20:54.115871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.808 [2024-07-25 13:20:54.115888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c1f1d0 00:20:43.808 [2024-07-25 13:20:54.115899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.808 [2024-07-25 13:20:54.117417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.808 [2024-07-25 13:20:54.117444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:43.808 BaseBdev1 00:20:43.808 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:43.808 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:44.067 BaseBdev2_malloc 00:20:44.067 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:44.326 true 00:20:44.326 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:44.584 [2024-07-25 13:20:54.818026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:20:44.584 [2024-07-25 13:20:54.818063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.584 [2024-07-25 13:20:54.818081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c22710 00:20:44.584 [2024-07-25 13:20:54.818092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.584 [2024-07-25 13:20:54.819441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.584 [2024-07-25 13:20:54.819467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:44.584 BaseBdev2 00:20:44.584 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:44.584 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:44.584 BaseBdev3_malloc 00:20:44.584 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:44.843 true 00:20:44.843 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:45.103 [2024-07-25 13:20:55.484055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:45.103 [2024-07-25 13:20:55.484094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.103 [2024-07-25 13:20:55.484119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c24de0 00:20:45.103 [2024-07-25 13:20:55.484131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.103 [2024-07-25 
13:20:55.485509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.103 [2024-07-25 13:20:55.485537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:45.103 BaseBdev3 00:20:45.103 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:45.103 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:45.362 BaseBdev4_malloc 00:20:45.362 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:20:45.621 true 00:20:45.621 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:45.880 [2024-07-25 13:20:56.137961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:45.880 [2024-07-25 13:20:56.137996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.880 [2024-07-25 13:20:56.138016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c27130 00:20:45.880 [2024-07-25 13:20:56.138028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.880 [2024-07-25 13:20:56.139355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.880 [2024-07-25 13:20:56.139382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:45.880 BaseBdev4 00:20:45.880 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:20:45.880 [2024-07-25 13:20:56.358575] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:45.880 [2024-07-25 13:20:56.359749] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:45.880 [2024-07-25 13:20:56.359812] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:45.880 [2024-07-25 13:20:56.359864] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:45.880 [2024-07-25 13:20:56.360058] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c29790 00:20:45.880 [2024-07-25 13:20:56.360068] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:45.880 [2024-07-25 13:20:56.360264] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c2c8f0 00:20:45.880 [2024-07-25 13:20:56.360396] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c29790 00:20:45.880 [2024-07-25 13:20:56.360405] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c29790 00:20:45.880 [2024-07-25 13:20:56.360509] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.139 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:46.139 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:46.139 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:46.139 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:46.139 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:46.139 13:20:56 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:46.139 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.139 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.139 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.139 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.139 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.139 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.139 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.139 "name": "raid_bdev1", 00:20:46.139 "uuid": "9adc2edd-04da-4fbb-9c72-24b158f68f9e", 00:20:46.139 "strip_size_kb": 64, 00:20:46.139 "state": "online", 00:20:46.139 "raid_level": "concat", 00:20:46.139 "superblock": true, 00:20:46.139 "num_base_bdevs": 4, 00:20:46.139 "num_base_bdevs_discovered": 4, 00:20:46.139 "num_base_bdevs_operational": 4, 00:20:46.139 "base_bdevs_list": [ 00:20:46.139 { 00:20:46.139 "name": "BaseBdev1", 00:20:46.139 "uuid": "c32e4e0b-0ccf-575f-b75e-617e8339d922", 00:20:46.139 "is_configured": true, 00:20:46.139 "data_offset": 2048, 00:20:46.139 "data_size": 63488 00:20:46.139 }, 00:20:46.139 { 00:20:46.139 "name": "BaseBdev2", 00:20:46.139 "uuid": "1953c7e1-7623-532e-a67b-a2b95fba61f4", 00:20:46.139 "is_configured": true, 00:20:46.139 "data_offset": 2048, 00:20:46.139 "data_size": 63488 00:20:46.139 }, 00:20:46.139 { 00:20:46.139 "name": "BaseBdev3", 00:20:46.139 "uuid": "4191cdab-7679-5daf-ad0b-38d0934e7b6f", 00:20:46.139 "is_configured": true, 00:20:46.139 "data_offset": 2048, 00:20:46.139 "data_size": 63488 00:20:46.139 }, 00:20:46.139 { 
00:20:46.139 "name": "BaseBdev4", 00:20:46.139 "uuid": "c909ff01-0160-5707-a3f7-cd61bf44401a", 00:20:46.139 "is_configured": true, 00:20:46.139 "data_offset": 2048, 00:20:46.139 "data_size": 63488 00:20:46.139 } 00:20:46.139 ] 00:20:46.139 }' 00:20:46.139 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.139 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.707 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:20:46.707 13:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:46.966 [2024-07-25 13:20:57.237132] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c29020 00:20:47.940 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:47.940 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:20:47.940 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:20:47.940 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:20:47.940 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:47.940 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:47.940 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:47.940 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:47.940 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:20:47.940 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:47.940 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:47.940 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:47.940 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:47.940 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:47.940 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.940 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.199 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:48.199 "name": "raid_bdev1", 00:20:48.199 "uuid": "9adc2edd-04da-4fbb-9c72-24b158f68f9e", 00:20:48.199 "strip_size_kb": 64, 00:20:48.199 "state": "online", 00:20:48.199 "raid_level": "concat", 00:20:48.199 "superblock": true, 00:20:48.199 "num_base_bdevs": 4, 00:20:48.199 "num_base_bdevs_discovered": 4, 00:20:48.199 "num_base_bdevs_operational": 4, 00:20:48.199 "base_bdevs_list": [ 00:20:48.199 { 00:20:48.199 "name": "BaseBdev1", 00:20:48.199 "uuid": "c32e4e0b-0ccf-575f-b75e-617e8339d922", 00:20:48.199 "is_configured": true, 00:20:48.199 "data_offset": 2048, 00:20:48.199 "data_size": 63488 00:20:48.199 }, 00:20:48.199 { 00:20:48.199 "name": "BaseBdev2", 00:20:48.199 "uuid": "1953c7e1-7623-532e-a67b-a2b95fba61f4", 00:20:48.199 "is_configured": true, 00:20:48.200 "data_offset": 2048, 00:20:48.200 "data_size": 63488 00:20:48.200 }, 00:20:48.200 { 00:20:48.200 "name": "BaseBdev3", 00:20:48.200 "uuid": "4191cdab-7679-5daf-ad0b-38d0934e7b6f", 00:20:48.200 "is_configured": true, 00:20:48.200 "data_offset": 2048, 00:20:48.200 
"data_size": 63488 00:20:48.200 }, 00:20:48.200 { 00:20:48.200 "name": "BaseBdev4", 00:20:48.200 "uuid": "c909ff01-0160-5707-a3f7-cd61bf44401a", 00:20:48.200 "is_configured": true, 00:20:48.200 "data_offset": 2048, 00:20:48.200 "data_size": 63488 00:20:48.200 } 00:20:48.200 ] 00:20:48.200 }' 00:20:48.200 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:48.200 13:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.769 13:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:49.029 [2024-07-25 13:20:59.387691] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:49.029 [2024-07-25 13:20:59.387731] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:49.029 [2024-07-25 13:20:59.390632] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:49.029 [2024-07-25 13:20:59.390667] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:49.029 [2024-07-25 13:20:59.390703] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:49.029 [2024-07-25 13:20:59.390713] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c29790 name raid_bdev1, state offline 00:20:49.029 0 00:20:49.029 13:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 932345 00:20:49.029 13:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 932345 ']' 00:20:49.029 13:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 932345 00:20:49.029 13:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:20:49.029 13:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux 
= Linux ']' 00:20:49.029 13:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 932345 00:20:49.029 13:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:49.029 13:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:49.029 13:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 932345' 00:20:49.029 killing process with pid 932345 00:20:49.029 13:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 932345 00:20:49.029 [2024-07-25 13:20:59.466617] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:49.029 13:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 932345 00:20:49.029 [2024-07-25 13:20:59.492331] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:49.288 13:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:20:49.288 13:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.Z4jD1GESe3 00:20:49.288 13:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:20:49.288 13:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:20:49.288 13:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:20:49.288 13:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:49.288 13:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:49.288 13:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:20:49.288 00:20:49.288 real 0m7.197s 00:20:49.288 user 0m11.454s 00:20:49.288 sys 0m1.270s 00:20:49.288 13:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:49.288 13:20:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.288 ************************************ 00:20:49.288 END TEST raid_read_error_test 00:20:49.288 ************************************ 00:20:49.288 13:20:59 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:20:49.288 13:20:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:49.288 13:20:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:49.288 13:20:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:49.288 ************************************ 00:20:49.288 START TEST raid_write_error_test 00:20:49.288 ************************************ 00:20:49.289 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:20:49.289 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:20:49.289 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:20:49.289 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= 
num_base_bdevs )) 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.0c5HHMZkns 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@824 -- # raid_pid=933519 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 933519 /var/tmp/spdk-raid.sock 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 933519 ']' 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:49.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.549 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:49.549 [2024-07-25 13:20:59.847549] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:20:49.549 [2024-07-25 13:20:59.847609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid933519 ] 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3d:02.3 cannot be used 
00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.549 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:49.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.550 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:49.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.550 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:49.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.550 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:49.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.550 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:49.550 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.550 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:49.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.550 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:49.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.550 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:49.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.550 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:49.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.550 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:49.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:49.550 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:49.550 [2024-07-25 13:20:59.980989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.809 [2024-07-25 13:21:00.076975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.809 [2024-07-25 13:21:00.144372] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:49.809 [2024-07-25 13:21:00.144406] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.385 13:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:50.385 13:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:20:50.385 13:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:50.385 13:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:50.953 BaseBdev1_malloc 00:20:50.953 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:51.212 true 00:20:51.212 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:51.471 [2024-07-25 13:21:01.955039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:51.471 [2024-07-25 13:21:01.955081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.471 [2024-07-25 13:21:01.955099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26441d0 00:20:51.471 [2024-07-25 13:21:01.955112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.471 [2024-07-25 13:21:01.956774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.471 [2024-07-25 13:21:01.956802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:51.730 BaseBdev1 00:20:51.730 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:51.730 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:51.989 BaseBdev2_malloc 00:20:52.247 13:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:52.247 true 00:20:52.247 13:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:52.816 [2024-07-25 13:21:03.190480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:20:52.816 [2024-07-25 13:21:03.190521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.816 [2024-07-25 13:21:03.190539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2647710 00:20:52.816 [2024-07-25 13:21:03.190551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.816 [2024-07-25 13:21:03.191996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.816 [2024-07-25 13:21:03.192023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:52.816 BaseBdev2 00:20:52.816 13:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:52.816 13:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:53.075 BaseBdev3_malloc 00:20:53.075 13:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:53.643 true 00:20:53.643 13:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:53.902 [2024-07-25 13:21:04.173386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:53.902 [2024-07-25 13:21:04.173426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.902 [2024-07-25 13:21:04.173446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2649de0 00:20:53.902 [2024-07-25 13:21:04.173458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.902 [2024-07-25 
13:21:04.174886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.903 [2024-07-25 13:21:04.174913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:53.903 BaseBdev3 00:20:53.903 13:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:53.903 13:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:54.162 BaseBdev4_malloc 00:20:54.162 13:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:20:54.421 true 00:20:54.680 13:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:54.680 [2024-07-25 13:21:05.132074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:54.680 [2024-07-25 13:21:05.132116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.680 [2024-07-25 13:21:05.132137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x264c130 00:20:54.680 [2024-07-25 13:21:05.132156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.680 [2024-07-25 13:21:05.133593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.680 [2024-07-25 13:21:05.133619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:54.680 BaseBdev4 00:20:54.680 13:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:20:54.940 [2024-07-25 13:21:05.356698] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.940 [2024-07-25 13:21:05.357922] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:54.940 [2024-07-25 13:21:05.357985] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:54.940 [2024-07-25 13:21:05.358037] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:54.940 [2024-07-25 13:21:05.358250] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x264e790 00:20:54.940 [2024-07-25 13:21:05.358261] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:54.940 [2024-07-25 13:21:05.358452] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26518f0 00:20:54.940 [2024-07-25 13:21:05.358585] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x264e790 00:20:54.940 [2024-07-25 13:21:05.358594] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x264e790 00:20:54.940 [2024-07-25 13:21:05.358701] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.940 13:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:54.940 13:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:54.940 13:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:54.940 13:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:54.940 13:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:54.940 13:21:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:54.940 13:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.940 13:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.940 13:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.940 13:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.940 13:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.940 13:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.199 13:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:55.199 "name": "raid_bdev1", 00:20:55.199 "uuid": "e1ad7eb5-93aa-4783-b1ac-45ea8b393f3e", 00:20:55.199 "strip_size_kb": 64, 00:20:55.199 "state": "online", 00:20:55.199 "raid_level": "concat", 00:20:55.199 "superblock": true, 00:20:55.199 "num_base_bdevs": 4, 00:20:55.199 "num_base_bdevs_discovered": 4, 00:20:55.199 "num_base_bdevs_operational": 4, 00:20:55.199 "base_bdevs_list": [ 00:20:55.199 { 00:20:55.199 "name": "BaseBdev1", 00:20:55.199 "uuid": "68a061ff-b5ec-5844-b8cf-df29f7ff4c99", 00:20:55.200 "is_configured": true, 00:20:55.200 "data_offset": 2048, 00:20:55.200 "data_size": 63488 00:20:55.200 }, 00:20:55.200 { 00:20:55.200 "name": "BaseBdev2", 00:20:55.200 "uuid": "6d64c6ef-9354-5417-beab-db6f3059b528", 00:20:55.200 "is_configured": true, 00:20:55.200 "data_offset": 2048, 00:20:55.200 "data_size": 63488 00:20:55.200 }, 00:20:55.200 { 00:20:55.200 "name": "BaseBdev3", 00:20:55.200 "uuid": "ed13f794-20f6-56f5-92a4-3aa491379d5e", 00:20:55.200 "is_configured": true, 00:20:55.200 "data_offset": 2048, 00:20:55.200 "data_size": 
63488 00:20:55.200 }, 00:20:55.200 { 00:20:55.200 "name": "BaseBdev4", 00:20:55.200 "uuid": "e5811429-1e3a-5e2f-b2b8-a41904437556", 00:20:55.200 "is_configured": true, 00:20:55.200 "data_offset": 2048, 00:20:55.200 "data_size": 63488 00:20:55.200 } 00:20:55.200 ] 00:20:55.200 }' 00:20:55.200 13:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:55.200 13:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.767 13:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:20:55.767 13:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:56.027 [2024-07-25 13:21:06.355571] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x264e020 00:20:56.965 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:57.224 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:20:57.224 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:20:57.224 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:20:57.224 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:57.224 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:57.224 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:57.224 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:57.224 13:21:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:57.224 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:57.224 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:57.224 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:57.225 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:57.225 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:57.225 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.225 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.485 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:57.485 "name": "raid_bdev1", 00:20:57.485 "uuid": "e1ad7eb5-93aa-4783-b1ac-45ea8b393f3e", 00:20:57.485 "strip_size_kb": 64, 00:20:57.485 "state": "online", 00:20:57.485 "raid_level": "concat", 00:20:57.485 "superblock": true, 00:20:57.485 "num_base_bdevs": 4, 00:20:57.485 "num_base_bdevs_discovered": 4, 00:20:57.485 "num_base_bdevs_operational": 4, 00:20:57.485 "base_bdevs_list": [ 00:20:57.485 { 00:20:57.485 "name": "BaseBdev1", 00:20:57.485 "uuid": "68a061ff-b5ec-5844-b8cf-df29f7ff4c99", 00:20:57.485 "is_configured": true, 00:20:57.485 "data_offset": 2048, 00:20:57.485 "data_size": 63488 00:20:57.485 }, 00:20:57.485 { 00:20:57.485 "name": "BaseBdev2", 00:20:57.485 "uuid": "6d64c6ef-9354-5417-beab-db6f3059b528", 00:20:57.485 "is_configured": true, 00:20:57.485 "data_offset": 2048, 00:20:57.485 "data_size": 63488 00:20:57.485 }, 00:20:57.485 { 00:20:57.485 "name": "BaseBdev3", 00:20:57.485 "uuid": "ed13f794-20f6-56f5-92a4-3aa491379d5e", 00:20:57.485 
"is_configured": true, 00:20:57.485 "data_offset": 2048, 00:20:57.485 "data_size": 63488 00:20:57.485 }, 00:20:57.485 { 00:20:57.485 "name": "BaseBdev4", 00:20:57.485 "uuid": "e5811429-1e3a-5e2f-b2b8-a41904437556", 00:20:57.485 "is_configured": true, 00:20:57.485 "data_offset": 2048, 00:20:57.485 "data_size": 63488 00:20:57.485 } 00:20:57.485 ] 00:20:57.485 }' 00:20:57.485 13:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:57.485 13:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.053 13:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:58.313 [2024-07-25 13:21:08.616568] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:58.313 [2024-07-25 13:21:08.616601] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:58.313 [2024-07-25 13:21:08.619511] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:58.313 [2024-07-25 13:21:08.619547] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.313 [2024-07-25 13:21:08.619583] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:58.313 [2024-07-25 13:21:08.619593] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x264e790 name raid_bdev1, state offline 00:20:58.313 0 00:20:58.313 13:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 933519 00:20:58.313 13:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 933519 ']' 00:20:58.313 13:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 933519 00:20:58.313 13:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:20:58.313 13:21:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:58.313 13:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 933519 00:20:58.313 13:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:58.313 13:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:58.313 13:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 933519' 00:20:58.313 killing process with pid 933519 00:20:58.313 13:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 933519 00:20:58.313 [2024-07-25 13:21:08.691186] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:58.313 13:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 933519 00:20:58.313 [2024-07-25 13:21:08.716646] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:58.572 13:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.0c5HHMZkns 00:20:58.572 13:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:20:58.572 13:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:20:58.572 13:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.44 00:20:58.572 13:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:20:58.572 13:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:58.572 13:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:58.572 13:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.44 != \0\.\0\0 ]] 00:20:58.572 00:20:58.572 real 0m9.142s 00:20:58.572 user 0m15.186s 00:20:58.572 sys 0m1.513s 00:20:58.572 13:21:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:58.572 13:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.572 ************************************ 00:20:58.572 END TEST raid_write_error_test 00:20:58.572 ************************************ 00:20:58.572 13:21:08 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:20:58.573 13:21:08 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:20:58.573 13:21:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:58.573 13:21:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:58.573 13:21:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:58.573 ************************************ 00:20:58.573 START TEST raid_state_function_test 00:20:58.573 ************************************ 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' 
false = true ']' 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=935733 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 935733' 00:20:58.573 Process raid pid: 935733 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 935733 /var/tmp/spdk-raid.sock 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 935733 ']' 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:58.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:58.573 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.833 [2024-07-25 13:21:09.075946] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:20:58.833 [2024-07-25 13:21:09.076001] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:58.833 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:58.833 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:58.833 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.833 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:58.833 [2024-07-25 13:21:09.208881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.833 [2024-07-25 13:21:09.294878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.092 [2024-07-25 13:21:09.356877] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:59.092 [2024-07-25 13:21:09.356909] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:59.659 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.659 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:20:59.660 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:59.918 [2024-07-25 13:21:10.170847] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:59.918 [2024-07-25 13:21:10.170887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:20:59.918 [2024-07-25 13:21:10.170897] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:59.918 [2024-07-25 13:21:10.170908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:59.918 [2024-07-25 13:21:10.170916] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:59.918 [2024-07-25 13:21:10.170926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:59.918 [2024-07-25 13:21:10.170934] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:59.918 [2024-07-25 13:21:10.170944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:59.918 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:59.918 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:59.918 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:59.918 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:59.918 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:59.918 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:59.918 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:59.918 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:59.918 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:59.918 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:59.918 13:21:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.918 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:00.177 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:00.177 "name": "Existed_Raid", 00:21:00.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.177 "strip_size_kb": 0, 00:21:00.177 "state": "configuring", 00:21:00.177 "raid_level": "raid1", 00:21:00.177 "superblock": false, 00:21:00.177 "num_base_bdevs": 4, 00:21:00.177 "num_base_bdevs_discovered": 0, 00:21:00.177 "num_base_bdevs_operational": 4, 00:21:00.177 "base_bdevs_list": [ 00:21:00.177 { 00:21:00.177 "name": "BaseBdev1", 00:21:00.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.177 "is_configured": false, 00:21:00.177 "data_offset": 0, 00:21:00.177 "data_size": 0 00:21:00.177 }, 00:21:00.177 { 00:21:00.177 "name": "BaseBdev2", 00:21:00.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.177 "is_configured": false, 00:21:00.177 "data_offset": 0, 00:21:00.177 "data_size": 0 00:21:00.177 }, 00:21:00.177 { 00:21:00.177 "name": "BaseBdev3", 00:21:00.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.177 "is_configured": false, 00:21:00.177 "data_offset": 0, 00:21:00.177 "data_size": 0 00:21:00.177 }, 00:21:00.177 { 00:21:00.177 "name": "BaseBdev4", 00:21:00.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.177 "is_configured": false, 00:21:00.177 "data_offset": 0, 00:21:00.177 "data_size": 0 00:21:00.177 } 00:21:00.177 ] 00:21:00.177 }' 00:21:00.177 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:00.177 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.747 13:21:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:00.747 [2024-07-25 13:21:11.209482] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:00.747 [2024-07-25 13:21:11.209514] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ae1f60 name Existed_Raid, state configuring 00:21:00.747 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:01.065 [2024-07-25 13:21:11.438092] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:01.065 [2024-07-25 13:21:11.438119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:01.065 [2024-07-25 13:21:11.438128] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:01.065 [2024-07-25 13:21:11.438144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:01.065 [2024-07-25 13:21:11.438153] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:01.065 [2024-07-25 13:21:11.438164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:01.065 [2024-07-25 13:21:11.438172] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:01.065 [2024-07-25 13:21:11.438183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:01.066 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:01.324 [2024-07-25 13:21:11.688245] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:01.324 BaseBdev1 00:21:01.324 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:01.324 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:01.324 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:01.324 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:01.324 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:01.324 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:01.324 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:01.584 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:01.844 [ 00:21:01.844 { 00:21:01.844 "name": "BaseBdev1", 00:21:01.844 "aliases": [ 00:21:01.844 "d2a3a8f0-5657-4150-98d5-d894fa98a760" 00:21:01.844 ], 00:21:01.844 "product_name": "Malloc disk", 00:21:01.844 "block_size": 512, 00:21:01.844 "num_blocks": 65536, 00:21:01.844 "uuid": "d2a3a8f0-5657-4150-98d5-d894fa98a760", 00:21:01.844 "assigned_rate_limits": { 00:21:01.844 "rw_ios_per_sec": 0, 00:21:01.844 "rw_mbytes_per_sec": 0, 00:21:01.844 "r_mbytes_per_sec": 0, 00:21:01.844 "w_mbytes_per_sec": 0 00:21:01.844 }, 00:21:01.844 "claimed": true, 00:21:01.844 "claim_type": "exclusive_write", 00:21:01.844 "zoned": false, 00:21:01.844 "supported_io_types": { 00:21:01.844 "read": true, 00:21:01.844 "write": true, 00:21:01.844 "unmap": true, 00:21:01.844 "flush": true, 00:21:01.844 
"reset": true, 00:21:01.844 "nvme_admin": false, 00:21:01.844 "nvme_io": false, 00:21:01.844 "nvme_io_md": false, 00:21:01.844 "write_zeroes": true, 00:21:01.844 "zcopy": true, 00:21:01.844 "get_zone_info": false, 00:21:01.844 "zone_management": false, 00:21:01.844 "zone_append": false, 00:21:01.844 "compare": false, 00:21:01.844 "compare_and_write": false, 00:21:01.844 "abort": true, 00:21:01.844 "seek_hole": false, 00:21:01.844 "seek_data": false, 00:21:01.844 "copy": true, 00:21:01.844 "nvme_iov_md": false 00:21:01.844 }, 00:21:01.844 "memory_domains": [ 00:21:01.844 { 00:21:01.844 "dma_device_id": "system", 00:21:01.844 "dma_device_type": 1 00:21:01.844 }, 00:21:01.844 { 00:21:01.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:01.844 "dma_device_type": 2 00:21:01.844 } 00:21:01.844 ], 00:21:01.844 "driver_specific": {} 00:21:01.844 } 00:21:01.844 ] 00:21:01.844 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:01.844 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:01.844 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:01.844 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:01.844 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:01.844 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:01.844 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:01.844 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:01.844 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:01.844 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:21:01.844 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:01.844 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.844 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.103 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.103 "name": "Existed_Raid", 00:21:02.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.103 "strip_size_kb": 0, 00:21:02.103 "state": "configuring", 00:21:02.104 "raid_level": "raid1", 00:21:02.104 "superblock": false, 00:21:02.104 "num_base_bdevs": 4, 00:21:02.104 "num_base_bdevs_discovered": 1, 00:21:02.104 "num_base_bdevs_operational": 4, 00:21:02.104 "base_bdevs_list": [ 00:21:02.104 { 00:21:02.104 "name": "BaseBdev1", 00:21:02.104 "uuid": "d2a3a8f0-5657-4150-98d5-d894fa98a760", 00:21:02.104 "is_configured": true, 00:21:02.104 "data_offset": 0, 00:21:02.104 "data_size": 65536 00:21:02.104 }, 00:21:02.104 { 00:21:02.104 "name": "BaseBdev2", 00:21:02.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.104 "is_configured": false, 00:21:02.104 "data_offset": 0, 00:21:02.104 "data_size": 0 00:21:02.104 }, 00:21:02.104 { 00:21:02.104 "name": "BaseBdev3", 00:21:02.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.104 "is_configured": false, 00:21:02.104 "data_offset": 0, 00:21:02.104 "data_size": 0 00:21:02.104 }, 00:21:02.104 { 00:21:02.104 "name": "BaseBdev4", 00:21:02.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.104 "is_configured": false, 00:21:02.104 "data_offset": 0, 00:21:02.104 "data_size": 0 00:21:02.104 } 00:21:02.104 ] 00:21:02.104 }' 00:21:02.104 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:21:02.104 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.672 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:02.932 [2024-07-25 13:21:13.164122] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:02.932 [2024-07-25 13:21:13.164161] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ae17d0 name Existed_Raid, state configuring 00:21:02.932 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:02.932 [2024-07-25 13:21:13.388747] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:02.932 [2024-07-25 13:21:13.390122] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:02.932 [2024-07-25 13:21:13.390158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:02.932 [2024-07-25 13:21:13.390168] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:02.932 [2024-07-25 13:21:13.390179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:02.932 [2024-07-25 13:21:13.390188] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:02.932 [2024-07-25 13:21:13.390198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:02.932 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:02.932 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:02.932 13:21:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:02.932 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:02.932 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:02.932 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:02.932 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:02.932 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:02.932 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.932 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.932 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.932 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.932 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.932 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:03.191 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:03.191 "name": "Existed_Raid", 00:21:03.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.191 "strip_size_kb": 0, 00:21:03.191 "state": "configuring", 00:21:03.191 "raid_level": "raid1", 00:21:03.191 "superblock": false, 00:21:03.191 "num_base_bdevs": 4, 00:21:03.191 "num_base_bdevs_discovered": 1, 00:21:03.191 "num_base_bdevs_operational": 4, 00:21:03.191 "base_bdevs_list": [ 00:21:03.191 { 00:21:03.191 
"name": "BaseBdev1", 00:21:03.191 "uuid": "d2a3a8f0-5657-4150-98d5-d894fa98a760", 00:21:03.191 "is_configured": true, 00:21:03.191 "data_offset": 0, 00:21:03.191 "data_size": 65536 00:21:03.191 }, 00:21:03.191 { 00:21:03.191 "name": "BaseBdev2", 00:21:03.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.191 "is_configured": false, 00:21:03.191 "data_offset": 0, 00:21:03.191 "data_size": 0 00:21:03.191 }, 00:21:03.191 { 00:21:03.191 "name": "BaseBdev3", 00:21:03.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.191 "is_configured": false, 00:21:03.191 "data_offset": 0, 00:21:03.191 "data_size": 0 00:21:03.191 }, 00:21:03.191 { 00:21:03.191 "name": "BaseBdev4", 00:21:03.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.191 "is_configured": false, 00:21:03.191 "data_offset": 0, 00:21:03.191 "data_size": 0 00:21:03.191 } 00:21:03.191 ] 00:21:03.191 }' 00:21:03.191 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:03.191 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.760 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:04.019 [2024-07-25 13:21:14.414483] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:04.019 BaseBdev2 00:21:04.019 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:04.019 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:04.019 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:04.019 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:04.019 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 
-- # [[ -z '' ]] 00:21:04.019 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:04.019 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:04.278 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:04.536 [ 00:21:04.536 { 00:21:04.536 "name": "BaseBdev2", 00:21:04.536 "aliases": [ 00:21:04.536 "d453ae44-9c0e-4995-9586-bfc78dc4117d" 00:21:04.536 ], 00:21:04.536 "product_name": "Malloc disk", 00:21:04.536 "block_size": 512, 00:21:04.536 "num_blocks": 65536, 00:21:04.536 "uuid": "d453ae44-9c0e-4995-9586-bfc78dc4117d", 00:21:04.536 "assigned_rate_limits": { 00:21:04.536 "rw_ios_per_sec": 0, 00:21:04.536 "rw_mbytes_per_sec": 0, 00:21:04.536 "r_mbytes_per_sec": 0, 00:21:04.536 "w_mbytes_per_sec": 0 00:21:04.536 }, 00:21:04.536 "claimed": true, 00:21:04.536 "claim_type": "exclusive_write", 00:21:04.536 "zoned": false, 00:21:04.536 "supported_io_types": { 00:21:04.536 "read": true, 00:21:04.536 "write": true, 00:21:04.536 "unmap": true, 00:21:04.536 "flush": true, 00:21:04.536 "reset": true, 00:21:04.536 "nvme_admin": false, 00:21:04.536 "nvme_io": false, 00:21:04.536 "nvme_io_md": false, 00:21:04.536 "write_zeroes": true, 00:21:04.536 "zcopy": true, 00:21:04.536 "get_zone_info": false, 00:21:04.536 "zone_management": false, 00:21:04.536 "zone_append": false, 00:21:04.536 "compare": false, 00:21:04.536 "compare_and_write": false, 00:21:04.536 "abort": true, 00:21:04.536 "seek_hole": false, 00:21:04.536 "seek_data": false, 00:21:04.536 "copy": true, 00:21:04.536 "nvme_iov_md": false 00:21:04.536 }, 00:21:04.536 "memory_domains": [ 00:21:04.536 { 00:21:04.536 "dma_device_id": "system", 00:21:04.536 
"dma_device_type": 1 00:21:04.536 }, 00:21:04.536 { 00:21:04.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.536 "dma_device_type": 2 00:21:04.536 } 00:21:04.536 ], 00:21:04.536 "driver_specific": {} 00:21:04.536 } 00:21:04.536 ] 00:21:04.536 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:04.536 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:04.536 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:04.536 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:04.536 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:04.536 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:04.536 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:04.536 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:04.536 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:04.537 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:04.537 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:04.537 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:04.537 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:04.537 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:04.537 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.795 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:04.795 "name": "Existed_Raid", 00:21:04.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.795 "strip_size_kb": 0, 00:21:04.795 "state": "configuring", 00:21:04.795 "raid_level": "raid1", 00:21:04.795 "superblock": false, 00:21:04.795 "num_base_bdevs": 4, 00:21:04.795 "num_base_bdevs_discovered": 2, 00:21:04.795 "num_base_bdevs_operational": 4, 00:21:04.795 "base_bdevs_list": [ 00:21:04.795 { 00:21:04.795 "name": "BaseBdev1", 00:21:04.795 "uuid": "d2a3a8f0-5657-4150-98d5-d894fa98a760", 00:21:04.795 "is_configured": true, 00:21:04.795 "data_offset": 0, 00:21:04.795 "data_size": 65536 00:21:04.795 }, 00:21:04.795 { 00:21:04.795 "name": "BaseBdev2", 00:21:04.795 "uuid": "d453ae44-9c0e-4995-9586-bfc78dc4117d", 00:21:04.795 "is_configured": true, 00:21:04.795 "data_offset": 0, 00:21:04.795 "data_size": 65536 00:21:04.795 }, 00:21:04.795 { 00:21:04.795 "name": "BaseBdev3", 00:21:04.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.795 "is_configured": false, 00:21:04.795 "data_offset": 0, 00:21:04.795 "data_size": 0 00:21:04.795 }, 00:21:04.795 { 00:21:04.795 "name": "BaseBdev4", 00:21:04.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.795 "is_configured": false, 00:21:04.795 "data_offset": 0, 00:21:04.795 "data_size": 0 00:21:04.795 } 00:21:04.795 ] 00:21:04.795 }' 00:21:04.795 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:04.795 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.360 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:05.619 [2024-07-25 13:21:15.909539] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev3 is claimed 00:21:05.619 BaseBdev3 00:21:05.619 13:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:05.619 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:05.619 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:05.619 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:05.619 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:05.619 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:05.619 13:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:05.878 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:05.878 [ 00:21:05.878 { 00:21:05.878 "name": "BaseBdev3", 00:21:05.878 "aliases": [ 00:21:05.878 "3d6d4a42-a036-4b33-b8a4-b3d4b072ba3f" 00:21:05.878 ], 00:21:05.878 "product_name": "Malloc disk", 00:21:05.878 "block_size": 512, 00:21:05.878 "num_blocks": 65536, 00:21:05.878 "uuid": "3d6d4a42-a036-4b33-b8a4-b3d4b072ba3f", 00:21:05.878 "assigned_rate_limits": { 00:21:05.878 "rw_ios_per_sec": 0, 00:21:05.878 "rw_mbytes_per_sec": 0, 00:21:05.878 "r_mbytes_per_sec": 0, 00:21:05.878 "w_mbytes_per_sec": 0 00:21:05.878 }, 00:21:05.878 "claimed": true, 00:21:05.878 "claim_type": "exclusive_write", 00:21:05.878 "zoned": false, 00:21:05.878 "supported_io_types": { 00:21:05.878 "read": true, 00:21:05.878 "write": true, 00:21:05.878 "unmap": true, 00:21:05.878 "flush": true, 00:21:05.878 "reset": true, 00:21:05.878 "nvme_admin": false, 00:21:05.878 
"nvme_io": false, 00:21:05.878 "nvme_io_md": false, 00:21:05.878 "write_zeroes": true, 00:21:05.878 "zcopy": true, 00:21:05.878 "get_zone_info": false, 00:21:05.878 "zone_management": false, 00:21:05.878 "zone_append": false, 00:21:05.878 "compare": false, 00:21:05.878 "compare_and_write": false, 00:21:05.878 "abort": true, 00:21:05.878 "seek_hole": false, 00:21:05.878 "seek_data": false, 00:21:05.878 "copy": true, 00:21:05.878 "nvme_iov_md": false 00:21:05.878 }, 00:21:05.878 "memory_domains": [ 00:21:05.878 { 00:21:05.878 "dma_device_id": "system", 00:21:05.878 "dma_device_type": 1 00:21:05.878 }, 00:21:05.878 { 00:21:05.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.878 "dma_device_type": 2 00:21:05.878 } 00:21:05.878 ], 00:21:05.878 "driver_specific": {} 00:21:05.878 } 00:21:05.878 ] 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.138 13:21:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:06.138 "name": "Existed_Raid", 00:21:06.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.138 "strip_size_kb": 0, 00:21:06.138 "state": "configuring", 00:21:06.138 "raid_level": "raid1", 00:21:06.138 "superblock": false, 00:21:06.138 "num_base_bdevs": 4, 00:21:06.138 "num_base_bdevs_discovered": 3, 00:21:06.138 "num_base_bdevs_operational": 4, 00:21:06.138 "base_bdevs_list": [ 00:21:06.138 { 00:21:06.138 "name": "BaseBdev1", 00:21:06.138 "uuid": "d2a3a8f0-5657-4150-98d5-d894fa98a760", 00:21:06.138 "is_configured": true, 00:21:06.138 "data_offset": 0, 00:21:06.138 "data_size": 65536 00:21:06.138 }, 00:21:06.138 { 00:21:06.138 "name": "BaseBdev2", 00:21:06.138 "uuid": "d453ae44-9c0e-4995-9586-bfc78dc4117d", 00:21:06.138 "is_configured": true, 00:21:06.138 "data_offset": 0, 00:21:06.138 "data_size": 65536 00:21:06.138 }, 00:21:06.138 { 00:21:06.138 "name": "BaseBdev3", 00:21:06.138 "uuid": "3d6d4a42-a036-4b33-b8a4-b3d4b072ba3f", 00:21:06.138 "is_configured": true, 00:21:06.138 "data_offset": 0, 00:21:06.138 "data_size": 65536 00:21:06.138 }, 00:21:06.138 { 00:21:06.138 "name": "BaseBdev4", 00:21:06.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.138 "is_configured": false, 00:21:06.138 "data_offset": 0, 
00:21:06.138 "data_size": 0 00:21:06.138 } 00:21:06.138 ] 00:21:06.138 }' 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:06.138 13:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.705 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:06.981 [2024-07-25 13:21:17.384676] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:06.981 [2024-07-25 13:21:17.384706] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ae2840 00:21:06.981 [2024-07-25 13:21:17.384714] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:06.981 [2024-07-25 13:21:17.384896] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ae2480 00:21:06.981 [2024-07-25 13:21:17.385015] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ae2840 00:21:06.981 [2024-07-25 13:21:17.385024] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1ae2840 00:21:06.981 [2024-07-25 13:21:17.385179] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.981 BaseBdev4 00:21:06.981 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:06.981 13:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:21:06.981 13:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:06.981 13:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:06.981 13:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:06.981 13:21:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:06.981 13:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:07.241 13:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:07.500 [ 00:21:07.500 { 00:21:07.500 "name": "BaseBdev4", 00:21:07.500 "aliases": [ 00:21:07.500 "2b7fdacb-d8ed-40b5-aca4-241934eb8f3a" 00:21:07.500 ], 00:21:07.500 "product_name": "Malloc disk", 00:21:07.500 "block_size": 512, 00:21:07.500 "num_blocks": 65536, 00:21:07.500 "uuid": "2b7fdacb-d8ed-40b5-aca4-241934eb8f3a", 00:21:07.500 "assigned_rate_limits": { 00:21:07.500 "rw_ios_per_sec": 0, 00:21:07.500 "rw_mbytes_per_sec": 0, 00:21:07.500 "r_mbytes_per_sec": 0, 00:21:07.500 "w_mbytes_per_sec": 0 00:21:07.500 }, 00:21:07.500 "claimed": true, 00:21:07.500 "claim_type": "exclusive_write", 00:21:07.500 "zoned": false, 00:21:07.500 "supported_io_types": { 00:21:07.500 "read": true, 00:21:07.500 "write": true, 00:21:07.500 "unmap": true, 00:21:07.500 "flush": true, 00:21:07.500 "reset": true, 00:21:07.500 "nvme_admin": false, 00:21:07.500 "nvme_io": false, 00:21:07.500 "nvme_io_md": false, 00:21:07.500 "write_zeroes": true, 00:21:07.500 "zcopy": true, 00:21:07.500 "get_zone_info": false, 00:21:07.500 "zone_management": false, 00:21:07.500 "zone_append": false, 00:21:07.500 "compare": false, 00:21:07.500 "compare_and_write": false, 00:21:07.500 "abort": true, 00:21:07.500 "seek_hole": false, 00:21:07.500 "seek_data": false, 00:21:07.500 "copy": true, 00:21:07.500 "nvme_iov_md": false 00:21:07.500 }, 00:21:07.500 "memory_domains": [ 00:21:07.500 { 00:21:07.500 "dma_device_id": "system", 00:21:07.500 "dma_device_type": 1 00:21:07.500 }, 00:21:07.500 { 
00:21:07.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.500 "dma_device_type": 2 00:21:07.500 } 00:21:07.500 ], 00:21:07.500 "driver_specific": {} 00:21:07.500 } 00:21:07.500 ] 00:21:07.500 13:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:07.500 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:07.500 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:07.500 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:21:07.500 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:07.500 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:07.500 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:07.501 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:07.501 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:07.501 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:07.501 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:07.501 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:07.501 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:07.501 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.501 13:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:07.760 13:21:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:07.760 "name": "Existed_Raid", 00:21:07.760 "uuid": "ee0082f4-aece-4d0a-9494-fd2b76eef6f5", 00:21:07.760 "strip_size_kb": 0, 00:21:07.760 "state": "online", 00:21:07.760 "raid_level": "raid1", 00:21:07.760 "superblock": false, 00:21:07.760 "num_base_bdevs": 4, 00:21:07.760 "num_base_bdevs_discovered": 4, 00:21:07.760 "num_base_bdevs_operational": 4, 00:21:07.760 "base_bdevs_list": [ 00:21:07.760 { 00:21:07.760 "name": "BaseBdev1", 00:21:07.760 "uuid": "d2a3a8f0-5657-4150-98d5-d894fa98a760", 00:21:07.760 "is_configured": true, 00:21:07.760 "data_offset": 0, 00:21:07.760 "data_size": 65536 00:21:07.760 }, 00:21:07.760 { 00:21:07.760 "name": "BaseBdev2", 00:21:07.760 "uuid": "d453ae44-9c0e-4995-9586-bfc78dc4117d", 00:21:07.760 "is_configured": true, 00:21:07.760 "data_offset": 0, 00:21:07.760 "data_size": 65536 00:21:07.760 }, 00:21:07.760 { 00:21:07.760 "name": "BaseBdev3", 00:21:07.760 "uuid": "3d6d4a42-a036-4b33-b8a4-b3d4b072ba3f", 00:21:07.760 "is_configured": true, 00:21:07.760 "data_offset": 0, 00:21:07.760 "data_size": 65536 00:21:07.760 }, 00:21:07.760 { 00:21:07.760 "name": "BaseBdev4", 00:21:07.760 "uuid": "2b7fdacb-d8ed-40b5-aca4-241934eb8f3a", 00:21:07.760 "is_configured": true, 00:21:07.760 "data_offset": 0, 00:21:07.760 "data_size": 65536 00:21:07.760 } 00:21:07.760 ] 00:21:07.760 }' 00:21:07.760 13:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:07.760 13:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.327 13:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:08.327 13:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:08.327 13:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:08.327 13:21:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:08.327 13:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:08.327 13:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:08.327 13:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:08.327 13:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:08.586 [2024-07-25 13:21:18.876924] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:08.586 13:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:08.586 "name": "Existed_Raid", 00:21:08.586 "aliases": [ 00:21:08.586 "ee0082f4-aece-4d0a-9494-fd2b76eef6f5" 00:21:08.586 ], 00:21:08.586 "product_name": "Raid Volume", 00:21:08.586 "block_size": 512, 00:21:08.586 "num_blocks": 65536, 00:21:08.586 "uuid": "ee0082f4-aece-4d0a-9494-fd2b76eef6f5", 00:21:08.586 "assigned_rate_limits": { 00:21:08.586 "rw_ios_per_sec": 0, 00:21:08.586 "rw_mbytes_per_sec": 0, 00:21:08.586 "r_mbytes_per_sec": 0, 00:21:08.586 "w_mbytes_per_sec": 0 00:21:08.586 }, 00:21:08.586 "claimed": false, 00:21:08.586 "zoned": false, 00:21:08.586 "supported_io_types": { 00:21:08.586 "read": true, 00:21:08.586 "write": true, 00:21:08.586 "unmap": false, 00:21:08.586 "flush": false, 00:21:08.586 "reset": true, 00:21:08.586 "nvme_admin": false, 00:21:08.586 "nvme_io": false, 00:21:08.586 "nvme_io_md": false, 00:21:08.586 "write_zeroes": true, 00:21:08.586 "zcopy": false, 00:21:08.586 "get_zone_info": false, 00:21:08.586 "zone_management": false, 00:21:08.586 "zone_append": false, 00:21:08.586 "compare": false, 00:21:08.586 "compare_and_write": false, 00:21:08.586 "abort": false, 00:21:08.586 "seek_hole": false, 00:21:08.586 "seek_data": 
false, 00:21:08.586 "copy": false, 00:21:08.586 "nvme_iov_md": false 00:21:08.586 }, 00:21:08.586 "memory_domains": [ 00:21:08.586 { 00:21:08.586 "dma_device_id": "system", 00:21:08.586 "dma_device_type": 1 00:21:08.586 }, 00:21:08.586 { 00:21:08.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.586 "dma_device_type": 2 00:21:08.586 }, 00:21:08.586 { 00:21:08.586 "dma_device_id": "system", 00:21:08.586 "dma_device_type": 1 00:21:08.586 }, 00:21:08.586 { 00:21:08.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.586 "dma_device_type": 2 00:21:08.586 }, 00:21:08.586 { 00:21:08.586 "dma_device_id": "system", 00:21:08.586 "dma_device_type": 1 00:21:08.586 }, 00:21:08.586 { 00:21:08.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.586 "dma_device_type": 2 00:21:08.586 }, 00:21:08.586 { 00:21:08.586 "dma_device_id": "system", 00:21:08.586 "dma_device_type": 1 00:21:08.586 }, 00:21:08.586 { 00:21:08.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.586 "dma_device_type": 2 00:21:08.586 } 00:21:08.586 ], 00:21:08.586 "driver_specific": { 00:21:08.586 "raid": { 00:21:08.586 "uuid": "ee0082f4-aece-4d0a-9494-fd2b76eef6f5", 00:21:08.586 "strip_size_kb": 0, 00:21:08.586 "state": "online", 00:21:08.586 "raid_level": "raid1", 00:21:08.586 "superblock": false, 00:21:08.586 "num_base_bdevs": 4, 00:21:08.586 "num_base_bdevs_discovered": 4, 00:21:08.586 "num_base_bdevs_operational": 4, 00:21:08.586 "base_bdevs_list": [ 00:21:08.586 { 00:21:08.586 "name": "BaseBdev1", 00:21:08.586 "uuid": "d2a3a8f0-5657-4150-98d5-d894fa98a760", 00:21:08.586 "is_configured": true, 00:21:08.586 "data_offset": 0, 00:21:08.586 "data_size": 65536 00:21:08.586 }, 00:21:08.586 { 00:21:08.587 "name": "BaseBdev2", 00:21:08.587 "uuid": "d453ae44-9c0e-4995-9586-bfc78dc4117d", 00:21:08.587 "is_configured": true, 00:21:08.587 "data_offset": 0, 00:21:08.587 "data_size": 65536 00:21:08.587 }, 00:21:08.587 { 00:21:08.587 "name": "BaseBdev3", 00:21:08.587 "uuid": 
"3d6d4a42-a036-4b33-b8a4-b3d4b072ba3f", 00:21:08.587 "is_configured": true, 00:21:08.587 "data_offset": 0, 00:21:08.587 "data_size": 65536 00:21:08.587 }, 00:21:08.587 { 00:21:08.587 "name": "BaseBdev4", 00:21:08.587 "uuid": "2b7fdacb-d8ed-40b5-aca4-241934eb8f3a", 00:21:08.587 "is_configured": true, 00:21:08.587 "data_offset": 0, 00:21:08.587 "data_size": 65536 00:21:08.587 } 00:21:08.587 ] 00:21:08.587 } 00:21:08.587 } 00:21:08.587 }' 00:21:08.587 13:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:08.587 13:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:08.587 BaseBdev2 00:21:08.587 BaseBdev3 00:21:08.587 BaseBdev4' 00:21:08.587 13:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:08.587 13:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:08.587 13:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:08.846 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:08.846 "name": "BaseBdev1", 00:21:08.846 "aliases": [ 00:21:08.846 "d2a3a8f0-5657-4150-98d5-d894fa98a760" 00:21:08.846 ], 00:21:08.846 "product_name": "Malloc disk", 00:21:08.846 "block_size": 512, 00:21:08.846 "num_blocks": 65536, 00:21:08.846 "uuid": "d2a3a8f0-5657-4150-98d5-d894fa98a760", 00:21:08.846 "assigned_rate_limits": { 00:21:08.846 "rw_ios_per_sec": 0, 00:21:08.846 "rw_mbytes_per_sec": 0, 00:21:08.846 "r_mbytes_per_sec": 0, 00:21:08.846 "w_mbytes_per_sec": 0 00:21:08.846 }, 00:21:08.846 "claimed": true, 00:21:08.846 "claim_type": "exclusive_write", 00:21:08.846 "zoned": false, 00:21:08.846 "supported_io_types": { 00:21:08.846 "read": true, 00:21:08.846 
"write": true, 00:21:08.846 "unmap": true, 00:21:08.846 "flush": true, 00:21:08.846 "reset": true, 00:21:08.846 "nvme_admin": false, 00:21:08.846 "nvme_io": false, 00:21:08.846 "nvme_io_md": false, 00:21:08.846 "write_zeroes": true, 00:21:08.846 "zcopy": true, 00:21:08.846 "get_zone_info": false, 00:21:08.846 "zone_management": false, 00:21:08.846 "zone_append": false, 00:21:08.846 "compare": false, 00:21:08.846 "compare_and_write": false, 00:21:08.846 "abort": true, 00:21:08.846 "seek_hole": false, 00:21:08.846 "seek_data": false, 00:21:08.846 "copy": true, 00:21:08.846 "nvme_iov_md": false 00:21:08.846 }, 00:21:08.846 "memory_domains": [ 00:21:08.846 { 00:21:08.846 "dma_device_id": "system", 00:21:08.846 "dma_device_type": 1 00:21:08.846 }, 00:21:08.846 { 00:21:08.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.846 "dma_device_type": 2 00:21:08.846 } 00:21:08.846 ], 00:21:08.846 "driver_specific": {} 00:21:08.846 }' 00:21:08.846 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:08.846 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:08.846 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:08.847 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:08.847 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:09.105 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:09.105 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:09.105 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:09.105 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:09.105 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:09.105 13:21:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:09.105 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:09.105 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:09.105 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:09.105 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:09.364 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:09.364 "name": "BaseBdev2", 00:21:09.364 "aliases": [ 00:21:09.364 "d453ae44-9c0e-4995-9586-bfc78dc4117d" 00:21:09.364 ], 00:21:09.364 "product_name": "Malloc disk", 00:21:09.364 "block_size": 512, 00:21:09.364 "num_blocks": 65536, 00:21:09.364 "uuid": "d453ae44-9c0e-4995-9586-bfc78dc4117d", 00:21:09.365 "assigned_rate_limits": { 00:21:09.365 "rw_ios_per_sec": 0, 00:21:09.365 "rw_mbytes_per_sec": 0, 00:21:09.365 "r_mbytes_per_sec": 0, 00:21:09.365 "w_mbytes_per_sec": 0 00:21:09.365 }, 00:21:09.365 "claimed": true, 00:21:09.365 "claim_type": "exclusive_write", 00:21:09.365 "zoned": false, 00:21:09.365 "supported_io_types": { 00:21:09.365 "read": true, 00:21:09.365 "write": true, 00:21:09.365 "unmap": true, 00:21:09.365 "flush": true, 00:21:09.365 "reset": true, 00:21:09.365 "nvme_admin": false, 00:21:09.365 "nvme_io": false, 00:21:09.365 "nvme_io_md": false, 00:21:09.365 "write_zeroes": true, 00:21:09.365 "zcopy": true, 00:21:09.365 "get_zone_info": false, 00:21:09.365 "zone_management": false, 00:21:09.365 "zone_append": false, 00:21:09.365 "compare": false, 00:21:09.365 "compare_and_write": false, 00:21:09.365 "abort": true, 00:21:09.365 "seek_hole": false, 00:21:09.365 "seek_data": false, 00:21:09.365 "copy": true, 00:21:09.365 "nvme_iov_md": false 00:21:09.365 }, 
00:21:09.365 "memory_domains": [ 00:21:09.365 { 00:21:09.365 "dma_device_id": "system", 00:21:09.365 "dma_device_type": 1 00:21:09.365 }, 00:21:09.365 { 00:21:09.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.365 "dma_device_type": 2 00:21:09.365 } 00:21:09.365 ], 00:21:09.365 "driver_specific": {} 00:21:09.365 }' 00:21:09.365 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:09.365 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:09.365 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:09.365 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:09.623 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:09.623 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:09.623 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:09.623 13:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:09.623 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:09.623 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:09.623 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:09.624 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:09.624 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:09.624 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:09.624 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:09.882 13:21:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:09.882 "name": "BaseBdev3", 00:21:09.882 "aliases": [ 00:21:09.882 "3d6d4a42-a036-4b33-b8a4-b3d4b072ba3f" 00:21:09.882 ], 00:21:09.882 "product_name": "Malloc disk", 00:21:09.882 "block_size": 512, 00:21:09.882 "num_blocks": 65536, 00:21:09.882 "uuid": "3d6d4a42-a036-4b33-b8a4-b3d4b072ba3f", 00:21:09.882 "assigned_rate_limits": { 00:21:09.882 "rw_ios_per_sec": 0, 00:21:09.882 "rw_mbytes_per_sec": 0, 00:21:09.882 "r_mbytes_per_sec": 0, 00:21:09.882 "w_mbytes_per_sec": 0 00:21:09.882 }, 00:21:09.882 "claimed": true, 00:21:09.882 "claim_type": "exclusive_write", 00:21:09.882 "zoned": false, 00:21:09.882 "supported_io_types": { 00:21:09.882 "read": true, 00:21:09.882 "write": true, 00:21:09.882 "unmap": true, 00:21:09.882 "flush": true, 00:21:09.882 "reset": true, 00:21:09.882 "nvme_admin": false, 00:21:09.882 "nvme_io": false, 00:21:09.882 "nvme_io_md": false, 00:21:09.882 "write_zeroes": true, 00:21:09.882 "zcopy": true, 00:21:09.882 "get_zone_info": false, 00:21:09.882 "zone_management": false, 00:21:09.882 "zone_append": false, 00:21:09.882 "compare": false, 00:21:09.883 "compare_and_write": false, 00:21:09.883 "abort": true, 00:21:09.883 "seek_hole": false, 00:21:09.883 "seek_data": false, 00:21:09.883 "copy": true, 00:21:09.883 "nvme_iov_md": false 00:21:09.883 }, 00:21:09.883 "memory_domains": [ 00:21:09.883 { 00:21:09.883 "dma_device_id": "system", 00:21:09.883 "dma_device_type": 1 00:21:09.883 }, 00:21:09.883 { 00:21:09.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.883 "dma_device_type": 2 00:21:09.883 } 00:21:09.883 ], 00:21:09.883 "driver_specific": {} 00:21:09.883 }' 00:21:09.883 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.140 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.140 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:21:10.140 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.140 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.140 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:10.140 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:10.140 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:10.140 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:10.140 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.399 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.399 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:10.399 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:10.399 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:10.399 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:10.657 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:10.657 "name": "BaseBdev4", 00:21:10.657 "aliases": [ 00:21:10.657 "2b7fdacb-d8ed-40b5-aca4-241934eb8f3a" 00:21:10.657 ], 00:21:10.657 "product_name": "Malloc disk", 00:21:10.657 "block_size": 512, 00:21:10.657 "num_blocks": 65536, 00:21:10.657 "uuid": "2b7fdacb-d8ed-40b5-aca4-241934eb8f3a", 00:21:10.657 "assigned_rate_limits": { 00:21:10.657 "rw_ios_per_sec": 0, 00:21:10.657 "rw_mbytes_per_sec": 0, 00:21:10.657 "r_mbytes_per_sec": 0, 00:21:10.657 "w_mbytes_per_sec": 0 00:21:10.657 }, 00:21:10.657 "claimed": true, 00:21:10.657 
"claim_type": "exclusive_write", 00:21:10.657 "zoned": false, 00:21:10.657 "supported_io_types": { 00:21:10.657 "read": true, 00:21:10.657 "write": true, 00:21:10.657 "unmap": true, 00:21:10.657 "flush": true, 00:21:10.657 "reset": true, 00:21:10.657 "nvme_admin": false, 00:21:10.657 "nvme_io": false, 00:21:10.657 "nvme_io_md": false, 00:21:10.657 "write_zeroes": true, 00:21:10.657 "zcopy": true, 00:21:10.657 "get_zone_info": false, 00:21:10.657 "zone_management": false, 00:21:10.657 "zone_append": false, 00:21:10.657 "compare": false, 00:21:10.657 "compare_and_write": false, 00:21:10.657 "abort": true, 00:21:10.657 "seek_hole": false, 00:21:10.657 "seek_data": false, 00:21:10.657 "copy": true, 00:21:10.657 "nvme_iov_md": false 00:21:10.657 }, 00:21:10.657 "memory_domains": [ 00:21:10.657 { 00:21:10.657 "dma_device_id": "system", 00:21:10.657 "dma_device_type": 1 00:21:10.657 }, 00:21:10.657 { 00:21:10.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.657 "dma_device_type": 2 00:21:10.657 } 00:21:10.657 ], 00:21:10.657 "driver_specific": {} 00:21:10.657 }' 00:21:10.657 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.657 13:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.657 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:10.657 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.657 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.657 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:10.657 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:10.657 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:10.915 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:21:10.915 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.915 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.915 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:10.915 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:11.174 [2024-07-25 13:21:21.479534] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:11.174 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:11.174 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:21:11.174 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:11.174 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:11.174 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:21:11.174 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:11.174 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:11.174 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:11.174 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:11.174 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:11.174 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:11.174 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:11.174 13:21:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:11.174 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:11.174 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:11.174 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.174 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.433 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:11.433 "name": "Existed_Raid", 00:21:11.433 "uuid": "ee0082f4-aece-4d0a-9494-fd2b76eef6f5", 00:21:11.433 "strip_size_kb": 0, 00:21:11.433 "state": "online", 00:21:11.433 "raid_level": "raid1", 00:21:11.433 "superblock": false, 00:21:11.433 "num_base_bdevs": 4, 00:21:11.433 "num_base_bdevs_discovered": 3, 00:21:11.433 "num_base_bdevs_operational": 3, 00:21:11.433 "base_bdevs_list": [ 00:21:11.433 { 00:21:11.433 "name": null, 00:21:11.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.433 "is_configured": false, 00:21:11.433 "data_offset": 0, 00:21:11.433 "data_size": 65536 00:21:11.433 }, 00:21:11.433 { 00:21:11.433 "name": "BaseBdev2", 00:21:11.433 "uuid": "d453ae44-9c0e-4995-9586-bfc78dc4117d", 00:21:11.433 "is_configured": true, 00:21:11.433 "data_offset": 0, 00:21:11.433 "data_size": 65536 00:21:11.433 }, 00:21:11.433 { 00:21:11.433 "name": "BaseBdev3", 00:21:11.433 "uuid": "3d6d4a42-a036-4b33-b8a4-b3d4b072ba3f", 00:21:11.433 "is_configured": true, 00:21:11.433 "data_offset": 0, 00:21:11.433 "data_size": 65536 00:21:11.433 }, 00:21:11.433 { 00:21:11.433 "name": "BaseBdev4", 00:21:11.433 "uuid": "2b7fdacb-d8ed-40b5-aca4-241934eb8f3a", 00:21:11.433 "is_configured": true, 00:21:11.433 "data_offset": 0, 00:21:11.433 
"data_size": 65536 00:21:11.433 } 00:21:11.433 ] 00:21:11.433 }' 00:21:11.433 13:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:11.433 13:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.001 13:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:12.001 13:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:12.001 13:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.001 13:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:12.001 13:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:12.260 13:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:12.260 13:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:12.260 [2024-07-25 13:21:22.699776] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:12.260 13:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:12.260 13:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:12.260 13:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.260 13:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:12.519 13:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:12.519 13:21:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:12.519 13:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:12.778 [2024-07-25 13:21:23.166894] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:12.778 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:12.778 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:12.778 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.778 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:13.037 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:13.037 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:13.037 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:13.295 [2024-07-25 13:21:23.630216] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:13.295 [2024-07-25 13:21:23.630281] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:13.295 [2024-07-25 13:21:23.640697] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:13.295 [2024-07-25 13:21:23.640724] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:13.295 [2024-07-25 13:21:23.640734] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x1ae2840 name Existed_Raid, state offline 00:21:13.295 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:13.295 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:13.295 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.295 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:13.552 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:13.552 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:13.552 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:13.552 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:13.552 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:13.552 13:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:13.851 BaseBdev2 00:21:13.851 13:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:13.851 13:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:13.851 13:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:13.851 13:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:13.851 13:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:13.851 13:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:21:13.851 13:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:14.129 13:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:14.129 [ 00:21:14.129 { 00:21:14.129 "name": "BaseBdev2", 00:21:14.129 "aliases": [ 00:21:14.129 "f3b0a77e-8c76-4016-b38c-2b46d1bd12d5" 00:21:14.129 ], 00:21:14.129 "product_name": "Malloc disk", 00:21:14.129 "block_size": 512, 00:21:14.129 "num_blocks": 65536, 00:21:14.129 "uuid": "f3b0a77e-8c76-4016-b38c-2b46d1bd12d5", 00:21:14.129 "assigned_rate_limits": { 00:21:14.129 "rw_ios_per_sec": 0, 00:21:14.129 "rw_mbytes_per_sec": 0, 00:21:14.129 "r_mbytes_per_sec": 0, 00:21:14.129 "w_mbytes_per_sec": 0 00:21:14.129 }, 00:21:14.129 "claimed": false, 00:21:14.129 "zoned": false, 00:21:14.129 "supported_io_types": { 00:21:14.129 "read": true, 00:21:14.129 "write": true, 00:21:14.129 "unmap": true, 00:21:14.129 "flush": true, 00:21:14.129 "reset": true, 00:21:14.129 "nvme_admin": false, 00:21:14.129 "nvme_io": false, 00:21:14.130 "nvme_io_md": false, 00:21:14.130 "write_zeroes": true, 00:21:14.130 "zcopy": true, 00:21:14.130 "get_zone_info": false, 00:21:14.130 "zone_management": false, 00:21:14.130 "zone_append": false, 00:21:14.130 "compare": false, 00:21:14.130 "compare_and_write": false, 00:21:14.130 "abort": true, 00:21:14.130 "seek_hole": false, 00:21:14.130 "seek_data": false, 00:21:14.130 "copy": true, 00:21:14.130 "nvme_iov_md": false 00:21:14.130 }, 00:21:14.130 "memory_domains": [ 00:21:14.130 { 00:21:14.130 "dma_device_id": "system", 00:21:14.130 "dma_device_type": 1 00:21:14.130 }, 00:21:14.130 { 00:21:14.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.130 "dma_device_type": 2 00:21:14.130 } 00:21:14.130 ], 00:21:14.130 
"driver_specific": {} 00:21:14.130 } 00:21:14.130 ] 00:21:14.130 13:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:14.130 13:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:14.130 13:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:14.130 13:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:14.389 BaseBdev3 00:21:14.389 13:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:14.389 13:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:14.389 13:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:14.389 13:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:14.389 13:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:14.389 13:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:14.389 13:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:14.648 13:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:14.908 [ 00:21:14.908 { 00:21:14.908 "name": "BaseBdev3", 00:21:14.908 "aliases": [ 00:21:14.908 "58f20572-5145-494e-b6ca-6f84049eaf99" 00:21:14.908 ], 00:21:14.908 "product_name": "Malloc disk", 00:21:14.908 "block_size": 512, 00:21:14.908 "num_blocks": 65536, 00:21:14.908 "uuid": 
"58f20572-5145-494e-b6ca-6f84049eaf99", 00:21:14.908 "assigned_rate_limits": { 00:21:14.908 "rw_ios_per_sec": 0, 00:21:14.908 "rw_mbytes_per_sec": 0, 00:21:14.908 "r_mbytes_per_sec": 0, 00:21:14.908 "w_mbytes_per_sec": 0 00:21:14.908 }, 00:21:14.908 "claimed": false, 00:21:14.908 "zoned": false, 00:21:14.908 "supported_io_types": { 00:21:14.908 "read": true, 00:21:14.908 "write": true, 00:21:14.908 "unmap": true, 00:21:14.908 "flush": true, 00:21:14.908 "reset": true, 00:21:14.908 "nvme_admin": false, 00:21:14.908 "nvme_io": false, 00:21:14.908 "nvme_io_md": false, 00:21:14.908 "write_zeroes": true, 00:21:14.908 "zcopy": true, 00:21:14.908 "get_zone_info": false, 00:21:14.908 "zone_management": false, 00:21:14.908 "zone_append": false, 00:21:14.908 "compare": false, 00:21:14.908 "compare_and_write": false, 00:21:14.908 "abort": true, 00:21:14.908 "seek_hole": false, 00:21:14.908 "seek_data": false, 00:21:14.908 "copy": true, 00:21:14.908 "nvme_iov_md": false 00:21:14.908 }, 00:21:14.908 "memory_domains": [ 00:21:14.908 { 00:21:14.908 "dma_device_id": "system", 00:21:14.908 "dma_device_type": 1 00:21:14.908 }, 00:21:14.908 { 00:21:14.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.908 "dma_device_type": 2 00:21:14.908 } 00:21:14.908 ], 00:21:14.908 "driver_specific": {} 00:21:14.908 } 00:21:14.908 ] 00:21:14.908 13:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:14.908 13:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:14.908 13:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:14.909 13:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:15.168 BaseBdev4 00:21:15.168 13:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 
00:21:15.168 13:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:21:15.168 13:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:15.168 13:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:15.168 13:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:15.168 13:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:15.168 13:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:15.426 13:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:15.426 [ 00:21:15.426 { 00:21:15.426 "name": "BaseBdev4", 00:21:15.426 "aliases": [ 00:21:15.426 "10cab122-9730-45b2-8597-4ccd52e20b58" 00:21:15.426 ], 00:21:15.427 "product_name": "Malloc disk", 00:21:15.427 "block_size": 512, 00:21:15.427 "num_blocks": 65536, 00:21:15.427 "uuid": "10cab122-9730-45b2-8597-4ccd52e20b58", 00:21:15.427 "assigned_rate_limits": { 00:21:15.427 "rw_ios_per_sec": 0, 00:21:15.427 "rw_mbytes_per_sec": 0, 00:21:15.427 "r_mbytes_per_sec": 0, 00:21:15.427 "w_mbytes_per_sec": 0 00:21:15.427 }, 00:21:15.427 "claimed": false, 00:21:15.427 "zoned": false, 00:21:15.427 "supported_io_types": { 00:21:15.427 "read": true, 00:21:15.427 "write": true, 00:21:15.427 "unmap": true, 00:21:15.427 "flush": true, 00:21:15.427 "reset": true, 00:21:15.427 "nvme_admin": false, 00:21:15.427 "nvme_io": false, 00:21:15.427 "nvme_io_md": false, 00:21:15.427 "write_zeroes": true, 00:21:15.427 "zcopy": true, 00:21:15.427 "get_zone_info": false, 00:21:15.427 "zone_management": false, 00:21:15.427 
"zone_append": false, 00:21:15.427 "compare": false, 00:21:15.427 "compare_and_write": false, 00:21:15.427 "abort": true, 00:21:15.427 "seek_hole": false, 00:21:15.427 "seek_data": false, 00:21:15.427 "copy": true, 00:21:15.427 "nvme_iov_md": false 00:21:15.427 }, 00:21:15.427 "memory_domains": [ 00:21:15.427 { 00:21:15.427 "dma_device_id": "system", 00:21:15.427 "dma_device_type": 1 00:21:15.427 }, 00:21:15.427 { 00:21:15.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.427 "dma_device_type": 2 00:21:15.427 } 00:21:15.427 ], 00:21:15.427 "driver_specific": {} 00:21:15.427 } 00:21:15.427 ] 00:21:15.427 13:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:15.427 13:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:15.427 13:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:15.427 13:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:15.686 [2024-07-25 13:21:26.103915] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:15.686 [2024-07-25 13:21:26.103953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:15.686 [2024-07-25 13:21:26.103971] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:15.686 [2024-07-25 13:21:26.105193] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:15.686 [2024-07-25 13:21:26.105233] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:15.686 13:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:15.686 13:21:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:15.686 13:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:15.686 13:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:15.686 13:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:15.686 13:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:15.686 13:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:15.686 13:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:15.686 13:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:15.686 13:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:15.686 13:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.686 13:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.946 13:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:15.946 "name": "Existed_Raid", 00:21:15.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.946 "strip_size_kb": 0, 00:21:15.946 "state": "configuring", 00:21:15.946 "raid_level": "raid1", 00:21:15.946 "superblock": false, 00:21:15.946 "num_base_bdevs": 4, 00:21:15.946 "num_base_bdevs_discovered": 3, 00:21:15.946 "num_base_bdevs_operational": 4, 00:21:15.946 "base_bdevs_list": [ 00:21:15.946 { 00:21:15.946 "name": "BaseBdev1", 00:21:15.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.946 "is_configured": false, 00:21:15.946 "data_offset": 
0, 00:21:15.946 "data_size": 0 00:21:15.946 }, 00:21:15.946 { 00:21:15.946 "name": "BaseBdev2", 00:21:15.946 "uuid": "f3b0a77e-8c76-4016-b38c-2b46d1bd12d5", 00:21:15.946 "is_configured": true, 00:21:15.946 "data_offset": 0, 00:21:15.946 "data_size": 65536 00:21:15.946 }, 00:21:15.946 { 00:21:15.946 "name": "BaseBdev3", 00:21:15.946 "uuid": "58f20572-5145-494e-b6ca-6f84049eaf99", 00:21:15.946 "is_configured": true, 00:21:15.946 "data_offset": 0, 00:21:15.946 "data_size": 65536 00:21:15.946 }, 00:21:15.946 { 00:21:15.946 "name": "BaseBdev4", 00:21:15.946 "uuid": "10cab122-9730-45b2-8597-4ccd52e20b58", 00:21:15.946 "is_configured": true, 00:21:15.946 "data_offset": 0, 00:21:15.946 "data_size": 65536 00:21:15.946 } 00:21:15.946 ] 00:21:15.946 }' 00:21:15.946 13:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:15.946 13:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.515 13:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:16.774 [2024-07-25 13:21:27.150665] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:16.774 13:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:16.774 13:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:16.774 13:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:16.774 13:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:16.774 13:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:16.774 13:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:21:16.774 13:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:16.774 13:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:16.774 13:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:16.774 13:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:16.774 13:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.774 13:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.034 13:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:17.034 "name": "Existed_Raid", 00:21:17.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.034 "strip_size_kb": 0, 00:21:17.034 "state": "configuring", 00:21:17.034 "raid_level": "raid1", 00:21:17.034 "superblock": false, 00:21:17.034 "num_base_bdevs": 4, 00:21:17.034 "num_base_bdevs_discovered": 2, 00:21:17.034 "num_base_bdevs_operational": 4, 00:21:17.034 "base_bdevs_list": [ 00:21:17.034 { 00:21:17.034 "name": "BaseBdev1", 00:21:17.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.034 "is_configured": false, 00:21:17.034 "data_offset": 0, 00:21:17.034 "data_size": 0 00:21:17.034 }, 00:21:17.034 { 00:21:17.034 "name": null, 00:21:17.034 "uuid": "f3b0a77e-8c76-4016-b38c-2b46d1bd12d5", 00:21:17.034 "is_configured": false, 00:21:17.034 "data_offset": 0, 00:21:17.034 "data_size": 65536 00:21:17.034 }, 00:21:17.034 { 00:21:17.034 "name": "BaseBdev3", 00:21:17.034 "uuid": "58f20572-5145-494e-b6ca-6f84049eaf99", 00:21:17.034 "is_configured": true, 00:21:17.034 "data_offset": 0, 00:21:17.034 "data_size": 65536 00:21:17.034 }, 00:21:17.034 { 00:21:17.034 "name": "BaseBdev4", 00:21:17.034 
"uuid": "10cab122-9730-45b2-8597-4ccd52e20b58", 00:21:17.034 "is_configured": true, 00:21:17.034 "data_offset": 0, 00:21:17.034 "data_size": 65536 00:21:17.034 } 00:21:17.034 ] 00:21:17.034 }' 00:21:17.034 13:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:17.034 13:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.603 13:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.603 13:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:17.862 13:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:17.862 13:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:18.126 [2024-07-25 13:21:28.364923] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:18.126 BaseBdev1 00:21:18.126 13:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:18.126 13:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:18.126 13:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:18.126 13:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:18.126 13:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:18.126 13:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:18.126 13:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:18.126 13:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:18.388 [ 00:21:18.388 { 00:21:18.388 "name": "BaseBdev1", 00:21:18.388 "aliases": [ 00:21:18.388 "59ed58a9-d446-4376-9fdf-4e1f471eeacf" 00:21:18.388 ], 00:21:18.388 "product_name": "Malloc disk", 00:21:18.388 "block_size": 512, 00:21:18.388 "num_blocks": 65536, 00:21:18.388 "uuid": "59ed58a9-d446-4376-9fdf-4e1f471eeacf", 00:21:18.388 "assigned_rate_limits": { 00:21:18.388 "rw_ios_per_sec": 0, 00:21:18.388 "rw_mbytes_per_sec": 0, 00:21:18.388 "r_mbytes_per_sec": 0, 00:21:18.388 "w_mbytes_per_sec": 0 00:21:18.388 }, 00:21:18.388 "claimed": true, 00:21:18.388 "claim_type": "exclusive_write", 00:21:18.388 "zoned": false, 00:21:18.388 "supported_io_types": { 00:21:18.388 "read": true, 00:21:18.388 "write": true, 00:21:18.388 "unmap": true, 00:21:18.388 "flush": true, 00:21:18.388 "reset": true, 00:21:18.388 "nvme_admin": false, 00:21:18.388 "nvme_io": false, 00:21:18.388 "nvme_io_md": false, 00:21:18.388 "write_zeroes": true, 00:21:18.388 "zcopy": true, 00:21:18.388 "get_zone_info": false, 00:21:18.388 "zone_management": false, 00:21:18.388 "zone_append": false, 00:21:18.388 "compare": false, 00:21:18.388 "compare_and_write": false, 00:21:18.388 "abort": true, 00:21:18.388 "seek_hole": false, 00:21:18.388 "seek_data": false, 00:21:18.388 "copy": true, 00:21:18.388 "nvme_iov_md": false 00:21:18.388 }, 00:21:18.388 "memory_domains": [ 00:21:18.388 { 00:21:18.388 "dma_device_id": "system", 00:21:18.388 "dma_device_type": 1 00:21:18.388 }, 00:21:18.388 { 00:21:18.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.388 "dma_device_type": 2 00:21:18.388 } 00:21:18.388 ], 00:21:18.388 "driver_specific": {} 00:21:18.388 } 00:21:18.388 ] 
00:21:18.388 13:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:18.388 13:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:18.388 13:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:18.388 13:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:18.388 13:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:18.388 13:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:18.388 13:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:18.388 13:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:18.388 13:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:18.388 13:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:18.388 13:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:18.388 13:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.388 13:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.647 13:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:18.647 "name": "Existed_Raid", 00:21:18.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.647 "strip_size_kb": 0, 00:21:18.647 "state": "configuring", 00:21:18.647 "raid_level": "raid1", 00:21:18.647 "superblock": false, 00:21:18.647 "num_base_bdevs": 4, 00:21:18.647 
"num_base_bdevs_discovered": 3, 00:21:18.647 "num_base_bdevs_operational": 4, 00:21:18.647 "base_bdevs_list": [ 00:21:18.647 { 00:21:18.647 "name": "BaseBdev1", 00:21:18.647 "uuid": "59ed58a9-d446-4376-9fdf-4e1f471eeacf", 00:21:18.647 "is_configured": true, 00:21:18.647 "data_offset": 0, 00:21:18.647 "data_size": 65536 00:21:18.647 }, 00:21:18.647 { 00:21:18.647 "name": null, 00:21:18.647 "uuid": "f3b0a77e-8c76-4016-b38c-2b46d1bd12d5", 00:21:18.647 "is_configured": false, 00:21:18.647 "data_offset": 0, 00:21:18.647 "data_size": 65536 00:21:18.647 }, 00:21:18.647 { 00:21:18.647 "name": "BaseBdev3", 00:21:18.647 "uuid": "58f20572-5145-494e-b6ca-6f84049eaf99", 00:21:18.647 "is_configured": true, 00:21:18.647 "data_offset": 0, 00:21:18.647 "data_size": 65536 00:21:18.647 }, 00:21:18.647 { 00:21:18.647 "name": "BaseBdev4", 00:21:18.647 "uuid": "10cab122-9730-45b2-8597-4ccd52e20b58", 00:21:18.647 "is_configured": true, 00:21:18.647 "data_offset": 0, 00:21:18.647 "data_size": 65536 00:21:18.647 } 00:21:18.647 ] 00:21:18.647 }' 00:21:18.647 13:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:18.647 13:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.215 13:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:19.215 13:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.474 13:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:19.474 13:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:19.733 [2024-07-25 13:21:30.073473] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:21:19.733 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:19.733 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:19.734 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:19.734 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:19.734 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:19.734 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:19.734 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:19.734 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:19.734 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:19.734 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:19.734 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.734 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.993 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.993 "name": "Existed_Raid", 00:21:19.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.993 "strip_size_kb": 0, 00:21:19.993 "state": "configuring", 00:21:19.993 "raid_level": "raid1", 00:21:19.993 "superblock": false, 00:21:19.993 "num_base_bdevs": 4, 00:21:19.993 "num_base_bdevs_discovered": 2, 00:21:19.993 "num_base_bdevs_operational": 4, 00:21:19.993 "base_bdevs_list": 
[ 00:21:19.993 { 00:21:19.993 "name": "BaseBdev1", 00:21:19.993 "uuid": "59ed58a9-d446-4376-9fdf-4e1f471eeacf", 00:21:19.993 "is_configured": true, 00:21:19.993 "data_offset": 0, 00:21:19.993 "data_size": 65536 00:21:19.993 }, 00:21:19.993 { 00:21:19.993 "name": null, 00:21:19.993 "uuid": "f3b0a77e-8c76-4016-b38c-2b46d1bd12d5", 00:21:19.993 "is_configured": false, 00:21:19.993 "data_offset": 0, 00:21:19.993 "data_size": 65536 00:21:19.993 }, 00:21:19.993 { 00:21:19.993 "name": null, 00:21:19.993 "uuid": "58f20572-5145-494e-b6ca-6f84049eaf99", 00:21:19.993 "is_configured": false, 00:21:19.993 "data_offset": 0, 00:21:19.993 "data_size": 65536 00:21:19.993 }, 00:21:19.993 { 00:21:19.993 "name": "BaseBdev4", 00:21:19.993 "uuid": "10cab122-9730-45b2-8597-4ccd52e20b58", 00:21:19.993 "is_configured": true, 00:21:19.993 "data_offset": 0, 00:21:19.993 "data_size": 65536 00:21:19.993 } 00:21:19.993 ] 00:21:19.993 }' 00:21:19.993 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.993 13:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.563 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.563 13:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:20.821 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:20.821 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:20.821 [2024-07-25 13:21:31.288709] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:20.821 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 
-- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:20.821 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:20.821 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:20.821 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:20.821 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:20.821 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:20.821 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:20.821 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:20.821 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:20.821 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:21.080 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.080 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.080 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:21.080 "name": "Existed_Raid", 00:21:21.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.080 "strip_size_kb": 0, 00:21:21.080 "state": "configuring", 00:21:21.080 "raid_level": "raid1", 00:21:21.080 "superblock": false, 00:21:21.080 "num_base_bdevs": 4, 00:21:21.080 "num_base_bdevs_discovered": 3, 00:21:21.080 "num_base_bdevs_operational": 4, 00:21:21.080 "base_bdevs_list": [ 00:21:21.080 { 00:21:21.080 "name": "BaseBdev1", 00:21:21.080 "uuid": 
"59ed58a9-d446-4376-9fdf-4e1f471eeacf", 00:21:21.080 "is_configured": true, 00:21:21.080 "data_offset": 0, 00:21:21.080 "data_size": 65536 00:21:21.080 }, 00:21:21.080 { 00:21:21.080 "name": null, 00:21:21.080 "uuid": "f3b0a77e-8c76-4016-b38c-2b46d1bd12d5", 00:21:21.080 "is_configured": false, 00:21:21.080 "data_offset": 0, 00:21:21.080 "data_size": 65536 00:21:21.080 }, 00:21:21.080 { 00:21:21.080 "name": "BaseBdev3", 00:21:21.080 "uuid": "58f20572-5145-494e-b6ca-6f84049eaf99", 00:21:21.080 "is_configured": true, 00:21:21.080 "data_offset": 0, 00:21:21.080 "data_size": 65536 00:21:21.080 }, 00:21:21.080 { 00:21:21.080 "name": "BaseBdev4", 00:21:21.080 "uuid": "10cab122-9730-45b2-8597-4ccd52e20b58", 00:21:21.080 "is_configured": true, 00:21:21.080 "data_offset": 0, 00:21:21.080 "data_size": 65536 00:21:21.080 } 00:21:21.080 ] 00:21:21.080 }' 00:21:21.080 13:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:21.080 13:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.648 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.648 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:21.907 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:21.907 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:22.166 [2024-07-25 13:21:32.540040] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:22.166 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:22.166 13:21:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:22.166 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:22.166 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:22.166 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:22.166 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:22.166 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:22.166 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:22.166 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:22.166 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:22.166 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.166 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:22.425 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:22.425 "name": "Existed_Raid", 00:21:22.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.425 "strip_size_kb": 0, 00:21:22.425 "state": "configuring", 00:21:22.425 "raid_level": "raid1", 00:21:22.425 "superblock": false, 00:21:22.425 "num_base_bdevs": 4, 00:21:22.425 "num_base_bdevs_discovered": 2, 00:21:22.425 "num_base_bdevs_operational": 4, 00:21:22.425 "base_bdevs_list": [ 00:21:22.425 { 00:21:22.425 "name": null, 00:21:22.425 "uuid": "59ed58a9-d446-4376-9fdf-4e1f471eeacf", 00:21:22.425 "is_configured": false, 00:21:22.425 "data_offset": 0, 
00:21:22.425 "data_size": 65536 00:21:22.425 }, 00:21:22.425 { 00:21:22.425 "name": null, 00:21:22.425 "uuid": "f3b0a77e-8c76-4016-b38c-2b46d1bd12d5", 00:21:22.425 "is_configured": false, 00:21:22.425 "data_offset": 0, 00:21:22.425 "data_size": 65536 00:21:22.425 }, 00:21:22.425 { 00:21:22.425 "name": "BaseBdev3", 00:21:22.425 "uuid": "58f20572-5145-494e-b6ca-6f84049eaf99", 00:21:22.425 "is_configured": true, 00:21:22.425 "data_offset": 0, 00:21:22.425 "data_size": 65536 00:21:22.425 }, 00:21:22.425 { 00:21:22.425 "name": "BaseBdev4", 00:21:22.425 "uuid": "10cab122-9730-45b2-8597-4ccd52e20b58", 00:21:22.425 "is_configured": true, 00:21:22.425 "data_offset": 0, 00:21:22.425 "data_size": 65536 00:21:22.425 } 00:21:22.425 ] 00:21:22.425 }' 00:21:22.425 13:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:22.425 13:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.993 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.993 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:23.251 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:23.252 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:23.510 [2024-07-25 13:21:33.797511] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:23.510 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:23.510 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:21:23.510 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:23.510 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:23.510 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:23.510 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:23.510 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:23.510 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:23.510 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:23.510 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:23.510 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.510 13:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.769 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:23.769 "name": "Existed_Raid", 00:21:23.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.769 "strip_size_kb": 0, 00:21:23.769 "state": "configuring", 00:21:23.769 "raid_level": "raid1", 00:21:23.769 "superblock": false, 00:21:23.769 "num_base_bdevs": 4, 00:21:23.769 "num_base_bdevs_discovered": 3, 00:21:23.769 "num_base_bdevs_operational": 4, 00:21:23.769 "base_bdevs_list": [ 00:21:23.769 { 00:21:23.769 "name": null, 00:21:23.769 "uuid": "59ed58a9-d446-4376-9fdf-4e1f471eeacf", 00:21:23.769 "is_configured": false, 00:21:23.769 "data_offset": 0, 00:21:23.769 "data_size": 65536 00:21:23.769 }, 00:21:23.769 { 
00:21:23.769 "name": "BaseBdev2", 00:21:23.769 "uuid": "f3b0a77e-8c76-4016-b38c-2b46d1bd12d5", 00:21:23.769 "is_configured": true, 00:21:23.769 "data_offset": 0, 00:21:23.769 "data_size": 65536 00:21:23.769 }, 00:21:23.769 { 00:21:23.769 "name": "BaseBdev3", 00:21:23.769 "uuid": "58f20572-5145-494e-b6ca-6f84049eaf99", 00:21:23.769 "is_configured": true, 00:21:23.769 "data_offset": 0, 00:21:23.769 "data_size": 65536 00:21:23.769 }, 00:21:23.769 { 00:21:23.769 "name": "BaseBdev4", 00:21:23.769 "uuid": "10cab122-9730-45b2-8597-4ccd52e20b58", 00:21:23.769 "is_configured": true, 00:21:23.769 "data_offset": 0, 00:21:23.769 "data_size": 65536 00:21:23.769 } 00:21:23.769 ] 00:21:23.769 }' 00:21:23.769 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:23.769 13:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.337 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.337 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:24.595 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:24.595 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.595 13:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:24.854 13:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 59ed58a9-d446-4376-9fdf-4e1f471eeacf 00:21:25.112 [2024-07-25 13:21:35.344750] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:25.112 [2024-07-25 13:21:35.344784] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ae1360 00:21:25.112 [2024-07-25 13:21:35.344791] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:25.112 [2024-07-25 13:21:35.344965] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ae1f20 00:21:25.113 [2024-07-25 13:21:35.345078] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ae1360 00:21:25.113 [2024-07-25 13:21:35.345087] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1ae1360 00:21:25.113 [2024-07-25 13:21:35.345241] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.113 NewBaseBdev 00:21:25.113 13:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:25.113 13:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:21:25.113 13:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:25.113 13:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:25.113 13:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:25.113 13:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:25.113 13:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:25.113 13:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:25.371 [ 00:21:25.371 { 
00:21:25.371 "name": "NewBaseBdev", 00:21:25.372 "aliases": [ 00:21:25.372 "59ed58a9-d446-4376-9fdf-4e1f471eeacf" 00:21:25.372 ], 00:21:25.372 "product_name": "Malloc disk", 00:21:25.372 "block_size": 512, 00:21:25.372 "num_blocks": 65536, 00:21:25.372 "uuid": "59ed58a9-d446-4376-9fdf-4e1f471eeacf", 00:21:25.372 "assigned_rate_limits": { 00:21:25.372 "rw_ios_per_sec": 0, 00:21:25.372 "rw_mbytes_per_sec": 0, 00:21:25.372 "r_mbytes_per_sec": 0, 00:21:25.372 "w_mbytes_per_sec": 0 00:21:25.372 }, 00:21:25.372 "claimed": true, 00:21:25.372 "claim_type": "exclusive_write", 00:21:25.372 "zoned": false, 00:21:25.372 "supported_io_types": { 00:21:25.372 "read": true, 00:21:25.372 "write": true, 00:21:25.372 "unmap": true, 00:21:25.372 "flush": true, 00:21:25.372 "reset": true, 00:21:25.372 "nvme_admin": false, 00:21:25.372 "nvme_io": false, 00:21:25.372 "nvme_io_md": false, 00:21:25.372 "write_zeroes": true, 00:21:25.372 "zcopy": true, 00:21:25.372 "get_zone_info": false, 00:21:25.372 "zone_management": false, 00:21:25.372 "zone_append": false, 00:21:25.372 "compare": false, 00:21:25.372 "compare_and_write": false, 00:21:25.372 "abort": true, 00:21:25.372 "seek_hole": false, 00:21:25.372 "seek_data": false, 00:21:25.372 "copy": true, 00:21:25.372 "nvme_iov_md": false 00:21:25.372 }, 00:21:25.372 "memory_domains": [ 00:21:25.372 { 00:21:25.372 "dma_device_id": "system", 00:21:25.372 "dma_device_type": 1 00:21:25.372 }, 00:21:25.372 { 00:21:25.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.372 "dma_device_type": 2 00:21:25.372 } 00:21:25.372 ], 00:21:25.372 "driver_specific": {} 00:21:25.372 } 00:21:25.372 ] 00:21:25.372 13:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:25.372 13:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:21:25.372 13:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:21:25.372 13:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:25.372 13:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:25.372 13:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:25.372 13:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:25.372 13:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:25.372 13:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:25.372 13:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:25.372 13:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:25.372 13:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.372 13:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.631 13:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:25.631 "name": "Existed_Raid", 00:21:25.631 "uuid": "03059512-da59-4934-8b1e-ad901a0b4654", 00:21:25.631 "strip_size_kb": 0, 00:21:25.631 "state": "online", 00:21:25.631 "raid_level": "raid1", 00:21:25.631 "superblock": false, 00:21:25.631 "num_base_bdevs": 4, 00:21:25.631 "num_base_bdevs_discovered": 4, 00:21:25.631 "num_base_bdevs_operational": 4, 00:21:25.631 "base_bdevs_list": [ 00:21:25.631 { 00:21:25.631 "name": "NewBaseBdev", 00:21:25.631 "uuid": "59ed58a9-d446-4376-9fdf-4e1f471eeacf", 00:21:25.631 "is_configured": true, 00:21:25.631 "data_offset": 0, 00:21:25.631 "data_size": 65536 00:21:25.631 }, 00:21:25.631 { 00:21:25.631 "name": "BaseBdev2", 
00:21:25.631 "uuid": "f3b0a77e-8c76-4016-b38c-2b46d1bd12d5", 00:21:25.631 "is_configured": true, 00:21:25.631 "data_offset": 0, 00:21:25.631 "data_size": 65536 00:21:25.631 }, 00:21:25.631 { 00:21:25.631 "name": "BaseBdev3", 00:21:25.631 "uuid": "58f20572-5145-494e-b6ca-6f84049eaf99", 00:21:25.631 "is_configured": true, 00:21:25.631 "data_offset": 0, 00:21:25.631 "data_size": 65536 00:21:25.631 }, 00:21:25.631 { 00:21:25.631 "name": "BaseBdev4", 00:21:25.631 "uuid": "10cab122-9730-45b2-8597-4ccd52e20b58", 00:21:25.631 "is_configured": true, 00:21:25.631 "data_offset": 0, 00:21:25.631 "data_size": 65536 00:21:25.631 } 00:21:25.631 ] 00:21:25.631 }' 00:21:25.631 13:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:25.631 13:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.199 13:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:26.199 13:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:26.199 13:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:26.199 13:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:26.199 13:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:26.199 13:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:26.199 13:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:26.199 13:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:26.459 [2024-07-25 13:21:36.857048] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:26.459 13:21:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:26.459 "name": "Existed_Raid", 00:21:26.459 "aliases": [ 00:21:26.459 "03059512-da59-4934-8b1e-ad901a0b4654" 00:21:26.459 ], 00:21:26.459 "product_name": "Raid Volume", 00:21:26.459 "block_size": 512, 00:21:26.459 "num_blocks": 65536, 00:21:26.459 "uuid": "03059512-da59-4934-8b1e-ad901a0b4654", 00:21:26.459 "assigned_rate_limits": { 00:21:26.459 "rw_ios_per_sec": 0, 00:21:26.459 "rw_mbytes_per_sec": 0, 00:21:26.459 "r_mbytes_per_sec": 0, 00:21:26.459 "w_mbytes_per_sec": 0 00:21:26.459 }, 00:21:26.459 "claimed": false, 00:21:26.459 "zoned": false, 00:21:26.459 "supported_io_types": { 00:21:26.459 "read": true, 00:21:26.459 "write": true, 00:21:26.459 "unmap": false, 00:21:26.459 "flush": false, 00:21:26.459 "reset": true, 00:21:26.459 "nvme_admin": false, 00:21:26.459 "nvme_io": false, 00:21:26.459 "nvme_io_md": false, 00:21:26.459 "write_zeroes": true, 00:21:26.459 "zcopy": false, 00:21:26.459 "get_zone_info": false, 00:21:26.459 "zone_management": false, 00:21:26.459 "zone_append": false, 00:21:26.459 "compare": false, 00:21:26.459 "compare_and_write": false, 00:21:26.459 "abort": false, 00:21:26.459 "seek_hole": false, 00:21:26.459 "seek_data": false, 00:21:26.459 "copy": false, 00:21:26.459 "nvme_iov_md": false 00:21:26.459 }, 00:21:26.459 "memory_domains": [ 00:21:26.459 { 00:21:26.459 "dma_device_id": "system", 00:21:26.459 "dma_device_type": 1 00:21:26.459 }, 00:21:26.459 { 00:21:26.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.459 "dma_device_type": 2 00:21:26.459 }, 00:21:26.459 { 00:21:26.459 "dma_device_id": "system", 00:21:26.459 "dma_device_type": 1 00:21:26.459 }, 00:21:26.459 { 00:21:26.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.459 "dma_device_type": 2 00:21:26.459 }, 00:21:26.459 { 00:21:26.459 "dma_device_id": "system", 00:21:26.459 "dma_device_type": 1 00:21:26.459 }, 00:21:26.459 { 00:21:26.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:21:26.459 "dma_device_type": 2 00:21:26.459 }, 00:21:26.459 { 00:21:26.459 "dma_device_id": "system", 00:21:26.459 "dma_device_type": 1 00:21:26.459 }, 00:21:26.459 { 00:21:26.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.459 "dma_device_type": 2 00:21:26.459 } 00:21:26.459 ], 00:21:26.459 "driver_specific": { 00:21:26.459 "raid": { 00:21:26.459 "uuid": "03059512-da59-4934-8b1e-ad901a0b4654", 00:21:26.459 "strip_size_kb": 0, 00:21:26.459 "state": "online", 00:21:26.459 "raid_level": "raid1", 00:21:26.459 "superblock": false, 00:21:26.459 "num_base_bdevs": 4, 00:21:26.459 "num_base_bdevs_discovered": 4, 00:21:26.459 "num_base_bdevs_operational": 4, 00:21:26.459 "base_bdevs_list": [ 00:21:26.459 { 00:21:26.459 "name": "NewBaseBdev", 00:21:26.459 "uuid": "59ed58a9-d446-4376-9fdf-4e1f471eeacf", 00:21:26.459 "is_configured": true, 00:21:26.459 "data_offset": 0, 00:21:26.459 "data_size": 65536 00:21:26.459 }, 00:21:26.459 { 00:21:26.459 "name": "BaseBdev2", 00:21:26.459 "uuid": "f3b0a77e-8c76-4016-b38c-2b46d1bd12d5", 00:21:26.459 "is_configured": true, 00:21:26.459 "data_offset": 0, 00:21:26.459 "data_size": 65536 00:21:26.459 }, 00:21:26.459 { 00:21:26.459 "name": "BaseBdev3", 00:21:26.459 "uuid": "58f20572-5145-494e-b6ca-6f84049eaf99", 00:21:26.459 "is_configured": true, 00:21:26.459 "data_offset": 0, 00:21:26.459 "data_size": 65536 00:21:26.459 }, 00:21:26.459 { 00:21:26.459 "name": "BaseBdev4", 00:21:26.459 "uuid": "10cab122-9730-45b2-8597-4ccd52e20b58", 00:21:26.459 "is_configured": true, 00:21:26.459 "data_offset": 0, 00:21:26.459 "data_size": 65536 00:21:26.459 } 00:21:26.459 ] 00:21:26.459 } 00:21:26.459 } 00:21:26.459 }' 00:21:26.459 13:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:26.762 13:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:26.762 BaseBdev2 00:21:26.762 BaseBdev3 
00:21:26.762 BaseBdev4' 00:21:26.762 13:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:26.762 13:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:26.762 13:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:26.762 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:26.762 "name": "NewBaseBdev", 00:21:26.762 "aliases": [ 00:21:26.762 "59ed58a9-d446-4376-9fdf-4e1f471eeacf" 00:21:26.762 ], 00:21:26.762 "product_name": "Malloc disk", 00:21:26.762 "block_size": 512, 00:21:26.762 "num_blocks": 65536, 00:21:26.762 "uuid": "59ed58a9-d446-4376-9fdf-4e1f471eeacf", 00:21:26.762 "assigned_rate_limits": { 00:21:26.762 "rw_ios_per_sec": 0, 00:21:26.762 "rw_mbytes_per_sec": 0, 00:21:26.762 "r_mbytes_per_sec": 0, 00:21:26.762 "w_mbytes_per_sec": 0 00:21:26.762 }, 00:21:26.762 "claimed": true, 00:21:26.762 "claim_type": "exclusive_write", 00:21:26.762 "zoned": false, 00:21:26.762 "supported_io_types": { 00:21:26.762 "read": true, 00:21:26.762 "write": true, 00:21:26.762 "unmap": true, 00:21:26.762 "flush": true, 00:21:26.762 "reset": true, 00:21:26.762 "nvme_admin": false, 00:21:26.762 "nvme_io": false, 00:21:26.762 "nvme_io_md": false, 00:21:26.762 "write_zeroes": true, 00:21:26.762 "zcopy": true, 00:21:26.762 "get_zone_info": false, 00:21:26.762 "zone_management": false, 00:21:26.762 "zone_append": false, 00:21:26.762 "compare": false, 00:21:26.762 "compare_and_write": false, 00:21:26.762 "abort": true, 00:21:26.762 "seek_hole": false, 00:21:26.762 "seek_data": false, 00:21:26.762 "copy": true, 00:21:26.762 "nvme_iov_md": false 00:21:26.762 }, 00:21:26.762 "memory_domains": [ 00:21:26.762 { 00:21:26.762 "dma_device_id": "system", 00:21:26.762 "dma_device_type": 1 00:21:26.762 }, 00:21:26.762 { 
00:21:26.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.762 "dma_device_type": 2 00:21:26.762 } 00:21:26.762 ], 00:21:26.762 "driver_specific": {} 00:21:26.762 }' 00:21:26.762 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.762 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:27.041 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:27.041 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:27.041 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:27.041 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:27.041 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:27.041 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:27.041 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:27.041 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:27.041 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:27.041 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:27.041 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:27.041 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:27.041 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:27.300 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:27.300 "name": "BaseBdev2", 00:21:27.300 "aliases": [ 00:21:27.300 
"f3b0a77e-8c76-4016-b38c-2b46d1bd12d5" 00:21:27.300 ], 00:21:27.300 "product_name": "Malloc disk", 00:21:27.300 "block_size": 512, 00:21:27.300 "num_blocks": 65536, 00:21:27.300 "uuid": "f3b0a77e-8c76-4016-b38c-2b46d1bd12d5", 00:21:27.300 "assigned_rate_limits": { 00:21:27.300 "rw_ios_per_sec": 0, 00:21:27.300 "rw_mbytes_per_sec": 0, 00:21:27.300 "r_mbytes_per_sec": 0, 00:21:27.300 "w_mbytes_per_sec": 0 00:21:27.300 }, 00:21:27.300 "claimed": true, 00:21:27.300 "claim_type": "exclusive_write", 00:21:27.300 "zoned": false, 00:21:27.300 "supported_io_types": { 00:21:27.300 "read": true, 00:21:27.300 "write": true, 00:21:27.300 "unmap": true, 00:21:27.300 "flush": true, 00:21:27.300 "reset": true, 00:21:27.300 "nvme_admin": false, 00:21:27.300 "nvme_io": false, 00:21:27.300 "nvme_io_md": false, 00:21:27.300 "write_zeroes": true, 00:21:27.300 "zcopy": true, 00:21:27.300 "get_zone_info": false, 00:21:27.300 "zone_management": false, 00:21:27.300 "zone_append": false, 00:21:27.300 "compare": false, 00:21:27.300 "compare_and_write": false, 00:21:27.300 "abort": true, 00:21:27.300 "seek_hole": false, 00:21:27.300 "seek_data": false, 00:21:27.300 "copy": true, 00:21:27.300 "nvme_iov_md": false 00:21:27.300 }, 00:21:27.300 "memory_domains": [ 00:21:27.300 { 00:21:27.300 "dma_device_id": "system", 00:21:27.300 "dma_device_type": 1 00:21:27.300 }, 00:21:27.300 { 00:21:27.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.300 "dma_device_type": 2 00:21:27.300 } 00:21:27.300 ], 00:21:27.300 "driver_specific": {} 00:21:27.300 }' 00:21:27.300 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:27.558 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:27.558 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:27.558 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:27.558 13:21:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:27.558 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:27.558 13:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:27.558 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:27.558 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:27.558 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:27.817 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:27.817 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:27.817 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:27.817 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:27.817 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:28.388 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:28.388 "name": "BaseBdev3", 00:21:28.388 "aliases": [ 00:21:28.388 "58f20572-5145-494e-b6ca-6f84049eaf99" 00:21:28.388 ], 00:21:28.388 "product_name": "Malloc disk", 00:21:28.388 "block_size": 512, 00:21:28.388 "num_blocks": 65536, 00:21:28.388 "uuid": "58f20572-5145-494e-b6ca-6f84049eaf99", 00:21:28.388 "assigned_rate_limits": { 00:21:28.388 "rw_ios_per_sec": 0, 00:21:28.388 "rw_mbytes_per_sec": 0, 00:21:28.388 "r_mbytes_per_sec": 0, 00:21:28.388 "w_mbytes_per_sec": 0 00:21:28.388 }, 00:21:28.388 "claimed": true, 00:21:28.388 "claim_type": "exclusive_write", 00:21:28.388 "zoned": false, 00:21:28.388 "supported_io_types": { 00:21:28.388 "read": true, 
00:21:28.388 "write": true, 00:21:28.388 "unmap": true, 00:21:28.388 "flush": true, 00:21:28.388 "reset": true, 00:21:28.388 "nvme_admin": false, 00:21:28.388 "nvme_io": false, 00:21:28.388 "nvme_io_md": false, 00:21:28.388 "write_zeroes": true, 00:21:28.388 "zcopy": true, 00:21:28.388 "get_zone_info": false, 00:21:28.388 "zone_management": false, 00:21:28.388 "zone_append": false, 00:21:28.388 "compare": false, 00:21:28.388 "compare_and_write": false, 00:21:28.388 "abort": true, 00:21:28.388 "seek_hole": false, 00:21:28.388 "seek_data": false, 00:21:28.388 "copy": true, 00:21:28.388 "nvme_iov_md": false 00:21:28.388 }, 00:21:28.388 "memory_domains": [ 00:21:28.388 { 00:21:28.388 "dma_device_id": "system", 00:21:28.388 "dma_device_type": 1 00:21:28.388 }, 00:21:28.388 { 00:21:28.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.388 "dma_device_type": 2 00:21:28.388 } 00:21:28.388 ], 00:21:28.388 "driver_specific": {} 00:21:28.388 }' 00:21:28.388 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:28.388 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:28.388 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:28.388 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:28.388 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:28.388 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:28.388 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:28.647 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:28.647 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:28.647 13:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:28.647 
13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:28.647 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:28.647 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:28.647 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:28.647 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:28.906 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:28.906 "name": "BaseBdev4", 00:21:28.906 "aliases": [ 00:21:28.906 "10cab122-9730-45b2-8597-4ccd52e20b58" 00:21:28.906 ], 00:21:28.906 "product_name": "Malloc disk", 00:21:28.906 "block_size": 512, 00:21:28.906 "num_blocks": 65536, 00:21:28.906 "uuid": "10cab122-9730-45b2-8597-4ccd52e20b58", 00:21:28.906 "assigned_rate_limits": { 00:21:28.906 "rw_ios_per_sec": 0, 00:21:28.906 "rw_mbytes_per_sec": 0, 00:21:28.906 "r_mbytes_per_sec": 0, 00:21:28.906 "w_mbytes_per_sec": 0 00:21:28.906 }, 00:21:28.906 "claimed": true, 00:21:28.906 "claim_type": "exclusive_write", 00:21:28.906 "zoned": false, 00:21:28.906 "supported_io_types": { 00:21:28.906 "read": true, 00:21:28.906 "write": true, 00:21:28.906 "unmap": true, 00:21:28.906 "flush": true, 00:21:28.906 "reset": true, 00:21:28.906 "nvme_admin": false, 00:21:28.906 "nvme_io": false, 00:21:28.906 "nvme_io_md": false, 00:21:28.906 "write_zeroes": true, 00:21:28.906 "zcopy": true, 00:21:28.906 "get_zone_info": false, 00:21:28.906 "zone_management": false, 00:21:28.906 "zone_append": false, 00:21:28.906 "compare": false, 00:21:28.906 "compare_and_write": false, 00:21:28.906 "abort": true, 00:21:28.906 "seek_hole": false, 00:21:28.906 "seek_data": false, 00:21:28.906 "copy": true, 00:21:28.906 "nvme_iov_md": false 
00:21:28.906 }, 00:21:28.906 "memory_domains": [ 00:21:28.906 { 00:21:28.906 "dma_device_id": "system", 00:21:28.906 "dma_device_type": 1 00:21:28.906 }, 00:21:28.906 { 00:21:28.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.907 "dma_device_type": 2 00:21:28.907 } 00:21:28.907 ], 00:21:28.907 "driver_specific": {} 00:21:28.907 }' 00:21:28.907 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:28.907 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:28.907 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:28.907 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:29.166 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:29.166 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:29.166 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:29.166 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:29.166 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:29.166 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:29.166 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:29.166 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:29.166 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:29.425 [2024-07-25 13:21:39.852668] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:29.425 [2024-07-25 13:21:39.852691] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:21:29.425 [2024-07-25 13:21:39.852732] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:29.425 [2024-07-25 13:21:39.852980] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:29.425 [2024-07-25 13:21:39.852991] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ae1360 name Existed_Raid, state offline 00:21:29.425 13:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 935733 00:21:29.425 13:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 935733 ']' 00:21:29.425 13:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 935733 00:21:29.425 13:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:21:29.425 13:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:29.425 13:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 935733 00:21:29.685 13:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:29.685 13:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:29.685 13:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 935733' 00:21:29.685 killing process with pid 935733 00:21:29.685 13:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 935733 00:21:29.685 [2024-07-25 13:21:39.931384] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:29.685 13:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 935733 00:21:29.685 [2024-07-25 13:21:39.961724] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:29.685 13:21:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:21:29.685 00:21:29.685 real 0m31.142s 00:21:29.685 user 0m57.256s 00:21:29.685 sys 0m5.548s 00:21:29.685 13:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:29.685 13:21:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.685 ************************************ 00:21:29.685 END TEST raid_state_function_test 00:21:29.685 ************************************ 00:21:29.944 13:21:40 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:21:29.944 13:21:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:29.944 13:21:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:29.944 13:21:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:29.944 ************************************ 00:21:29.944 START TEST raid_state_function_test_sb 00:21:29.944 ************************************ 00:21:29.944 13:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:21:29.944 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:21:29.944 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:29.944 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:29.944 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:29.944 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:29.944 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:29.944 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:29.944 13:21:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:29.944 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:29.944 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:29.944 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:29.944 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:29.944 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:29.944 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:29.944 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 
00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=941707 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 941707' 00:21:29.945 Process raid pid: 941707 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 941707 /var/tmp/spdk-raid.sock 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 941707 ']' 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:29.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:29.945 13:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.945 [2024-07-25 13:21:40.305519] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:21:29.945 [2024-07-25 13:21:40.305575] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:29.945 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:29.945 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:29.945 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:29.945 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:30.204 [2024-07-25 13:21:40.437988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.204 [2024-07-25 13:21:40.524204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.204 [2024-07-25 13:21:40.583846] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:30.204 [2024-07-25 13:21:40.583881] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:30.773 13:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:30.773 13:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:21:30.773 13:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:31.032 [2024-07-25 13:21:41.431428] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:31.032 [2024-07-25 13:21:41.431465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:21:31.032 [2024-07-25 13:21:41.431475] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:31.032 [2024-07-25 13:21:41.431486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:31.032 [2024-07-25 13:21:41.431494] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:31.032 [2024-07-25 13:21:41.431504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:31.032 [2024-07-25 13:21:41.431512] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:31.032 [2024-07-25 13:21:41.431522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:31.032 13:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:31.032 13:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:31.032 13:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:31.032 13:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:31.032 13:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:31.032 13:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:31.032 13:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:31.032 13:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:31.032 13:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:31.032 13:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:21:31.032 13:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.032 13:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.291 13:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:31.291 "name": "Existed_Raid", 00:21:31.291 "uuid": "9f494332-f42b-462b-86dc-cdb0463548c8", 00:21:31.291 "strip_size_kb": 0, 00:21:31.291 "state": "configuring", 00:21:31.291 "raid_level": "raid1", 00:21:31.291 "superblock": true, 00:21:31.291 "num_base_bdevs": 4, 00:21:31.291 "num_base_bdevs_discovered": 0, 00:21:31.291 "num_base_bdevs_operational": 4, 00:21:31.291 "base_bdevs_list": [ 00:21:31.291 { 00:21:31.291 "name": "BaseBdev1", 00:21:31.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.291 "is_configured": false, 00:21:31.291 "data_offset": 0, 00:21:31.291 "data_size": 0 00:21:31.291 }, 00:21:31.291 { 00:21:31.291 "name": "BaseBdev2", 00:21:31.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.291 "is_configured": false, 00:21:31.291 "data_offset": 0, 00:21:31.291 "data_size": 0 00:21:31.291 }, 00:21:31.291 { 00:21:31.291 "name": "BaseBdev3", 00:21:31.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.291 "is_configured": false, 00:21:31.291 "data_offset": 0, 00:21:31.291 "data_size": 0 00:21:31.291 }, 00:21:31.291 { 00:21:31.291 "name": "BaseBdev4", 00:21:31.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.291 "is_configured": false, 00:21:31.291 "data_offset": 0, 00:21:31.291 "data_size": 0 00:21:31.291 } 00:21:31.291 ] 00:21:31.291 }' 00:21:31.291 13:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:31.291 13:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.859 13:21:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:32.119 [2024-07-25 13:21:42.446152] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:32.119 [2024-07-25 13:21:42.446187] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bc7f60 name Existed_Raid, state configuring 00:21:32.119 13:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:32.378 [2024-07-25 13:21:42.674768] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:32.378 [2024-07-25 13:21:42.674795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:32.378 [2024-07-25 13:21:42.674804] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:32.378 [2024-07-25 13:21:42.674815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:32.378 [2024-07-25 13:21:42.674823] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:32.378 [2024-07-25 13:21:42.674833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:32.378 [2024-07-25 13:21:42.674841] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:32.378 [2024-07-25 13:21:42.674851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:32.378 13:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 
00:21:32.378 [2024-07-25 13:21:42.864609] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:32.378 BaseBdev1 00:21:32.637 13:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:32.637 13:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:32.637 13:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:32.637 13:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:32.637 13:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:32.637 13:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:32.637 13:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:32.637 13:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:32.897 [ 00:21:32.897 { 00:21:32.897 "name": "BaseBdev1", 00:21:32.897 "aliases": [ 00:21:32.897 "e9243e6e-b09e-4cd2-b48a-613d63691952" 00:21:32.897 ], 00:21:32.897 "product_name": "Malloc disk", 00:21:32.897 "block_size": 512, 00:21:32.897 "num_blocks": 65536, 00:21:32.897 "uuid": "e9243e6e-b09e-4cd2-b48a-613d63691952", 00:21:32.897 "assigned_rate_limits": { 00:21:32.897 "rw_ios_per_sec": 0, 00:21:32.897 "rw_mbytes_per_sec": 0, 00:21:32.897 "r_mbytes_per_sec": 0, 00:21:32.897 "w_mbytes_per_sec": 0 00:21:32.897 }, 00:21:32.897 "claimed": true, 00:21:32.897 "claim_type": "exclusive_write", 00:21:32.897 "zoned": false, 00:21:32.897 "supported_io_types": { 00:21:32.897 "read": true, 00:21:32.897 "write": true, 
00:21:32.897 "unmap": true, 00:21:32.897 "flush": true, 00:21:32.897 "reset": true, 00:21:32.897 "nvme_admin": false, 00:21:32.897 "nvme_io": false, 00:21:32.897 "nvme_io_md": false, 00:21:32.897 "write_zeroes": true, 00:21:32.897 "zcopy": true, 00:21:32.897 "get_zone_info": false, 00:21:32.897 "zone_management": false, 00:21:32.897 "zone_append": false, 00:21:32.897 "compare": false, 00:21:32.897 "compare_and_write": false, 00:21:32.897 "abort": true, 00:21:32.897 "seek_hole": false, 00:21:32.897 "seek_data": false, 00:21:32.897 "copy": true, 00:21:32.897 "nvme_iov_md": false 00:21:32.897 }, 00:21:32.897 "memory_domains": [ 00:21:32.897 { 00:21:32.897 "dma_device_id": "system", 00:21:32.897 "dma_device_type": 1 00:21:32.897 }, 00:21:32.897 { 00:21:32.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.897 "dma_device_type": 2 00:21:32.897 } 00:21:32.897 ], 00:21:32.897 "driver_specific": {} 00:21:32.897 } 00:21:32.897 ] 00:21:32.897 13:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:32.897 13:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:32.897 13:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:32.897 13:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:32.897 13:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:32.897 13:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:32.897 13:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:32.897 13:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:32.897 13:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:21:32.897 13:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:32.897 13:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:32.897 13:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.897 13:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.156 13:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:33.156 "name": "Existed_Raid", 00:21:33.156 "uuid": "b5a8c71c-9191-448b-aeff-ae3561121618", 00:21:33.156 "strip_size_kb": 0, 00:21:33.156 "state": "configuring", 00:21:33.156 "raid_level": "raid1", 00:21:33.156 "superblock": true, 00:21:33.156 "num_base_bdevs": 4, 00:21:33.156 "num_base_bdevs_discovered": 1, 00:21:33.156 "num_base_bdevs_operational": 4, 00:21:33.156 "base_bdevs_list": [ 00:21:33.156 { 00:21:33.156 "name": "BaseBdev1", 00:21:33.156 "uuid": "e9243e6e-b09e-4cd2-b48a-613d63691952", 00:21:33.156 "is_configured": true, 00:21:33.156 "data_offset": 2048, 00:21:33.156 "data_size": 63488 00:21:33.156 }, 00:21:33.156 { 00:21:33.156 "name": "BaseBdev2", 00:21:33.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.156 "is_configured": false, 00:21:33.156 "data_offset": 0, 00:21:33.156 "data_size": 0 00:21:33.156 }, 00:21:33.156 { 00:21:33.156 "name": "BaseBdev3", 00:21:33.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.156 "is_configured": false, 00:21:33.156 "data_offset": 0, 00:21:33.156 "data_size": 0 00:21:33.156 }, 00:21:33.156 { 00:21:33.156 "name": "BaseBdev4", 00:21:33.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.156 "is_configured": false, 00:21:33.156 "data_offset": 0, 00:21:33.156 "data_size": 0 00:21:33.156 } 00:21:33.156 ] 
00:21:33.156 }' 00:21:33.156 13:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:33.156 13:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.722 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:33.981 [2024-07-25 13:21:44.272304] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:33.981 [2024-07-25 13:21:44.272337] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bc77d0 name Existed_Raid, state configuring 00:21:33.981 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:34.239 [2024-07-25 13:21:44.500950] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:34.239 [2024-07-25 13:21:44.502341] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:34.239 [2024-07-25 13:21:44.502377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:34.239 [2024-07-25 13:21:44.502387] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:34.239 [2024-07-25 13:21:44.502398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:34.239 [2024-07-25 13:21:44.502406] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:34.239 [2024-07-25 13:21:44.502416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:34.239 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 
00:21:34.239 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:34.239 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:34.239 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:34.239 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:34.239 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:34.239 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:34.239 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:34.239 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:34.239 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:34.239 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:34.239 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:34.239 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.239 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.498 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:34.498 "name": "Existed_Raid", 00:21:34.498 "uuid": "03905259-538a-4990-9a77-ac7d4fe341fe", 00:21:34.498 "strip_size_kb": 0, 00:21:34.498 "state": "configuring", 00:21:34.498 "raid_level": "raid1", 00:21:34.498 "superblock": true, 00:21:34.498 
"num_base_bdevs": 4, 00:21:34.498 "num_base_bdevs_discovered": 1, 00:21:34.498 "num_base_bdevs_operational": 4, 00:21:34.498 "base_bdevs_list": [ 00:21:34.498 { 00:21:34.498 "name": "BaseBdev1", 00:21:34.498 "uuid": "e9243e6e-b09e-4cd2-b48a-613d63691952", 00:21:34.498 "is_configured": true, 00:21:34.498 "data_offset": 2048, 00:21:34.498 "data_size": 63488 00:21:34.498 }, 00:21:34.498 { 00:21:34.498 "name": "BaseBdev2", 00:21:34.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.498 "is_configured": false, 00:21:34.498 "data_offset": 0, 00:21:34.498 "data_size": 0 00:21:34.498 }, 00:21:34.498 { 00:21:34.498 "name": "BaseBdev3", 00:21:34.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.498 "is_configured": false, 00:21:34.498 "data_offset": 0, 00:21:34.498 "data_size": 0 00:21:34.498 }, 00:21:34.498 { 00:21:34.498 "name": "BaseBdev4", 00:21:34.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.498 "is_configured": false, 00:21:34.498 "data_offset": 0, 00:21:34.498 "data_size": 0 00:21:34.498 } 00:21:34.498 ] 00:21:34.498 }' 00:21:34.498 13:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:34.498 13:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.066 13:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:35.066 [2024-07-25 13:21:45.538792] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:35.066 BaseBdev2 00:21:35.327 13:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:35.327 13:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:35.327 13:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:21:35.327 13:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:35.327 13:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:35.327 13:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:35.327 13:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:35.327 13:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:35.585 [ 00:21:35.585 { 00:21:35.585 "name": "BaseBdev2", 00:21:35.585 "aliases": [ 00:21:35.585 "7edf7934-c5f7-462d-9a98-e89ccd656dc3" 00:21:35.585 ], 00:21:35.585 "product_name": "Malloc disk", 00:21:35.585 "block_size": 512, 00:21:35.585 "num_blocks": 65536, 00:21:35.585 "uuid": "7edf7934-c5f7-462d-9a98-e89ccd656dc3", 00:21:35.585 "assigned_rate_limits": { 00:21:35.585 "rw_ios_per_sec": 0, 00:21:35.585 "rw_mbytes_per_sec": 0, 00:21:35.585 "r_mbytes_per_sec": 0, 00:21:35.585 "w_mbytes_per_sec": 0 00:21:35.585 }, 00:21:35.585 "claimed": true, 00:21:35.585 "claim_type": "exclusive_write", 00:21:35.585 "zoned": false, 00:21:35.585 "supported_io_types": { 00:21:35.585 "read": true, 00:21:35.585 "write": true, 00:21:35.585 "unmap": true, 00:21:35.585 "flush": true, 00:21:35.585 "reset": true, 00:21:35.585 "nvme_admin": false, 00:21:35.585 "nvme_io": false, 00:21:35.585 "nvme_io_md": false, 00:21:35.585 "write_zeroes": true, 00:21:35.585 "zcopy": true, 00:21:35.586 "get_zone_info": false, 00:21:35.586 "zone_management": false, 00:21:35.586 "zone_append": false, 00:21:35.586 "compare": false, 00:21:35.586 "compare_and_write": false, 00:21:35.586 "abort": true, 00:21:35.586 "seek_hole": false, 00:21:35.586 
"seek_data": false, 00:21:35.586 "copy": true, 00:21:35.586 "nvme_iov_md": false 00:21:35.586 }, 00:21:35.586 "memory_domains": [ 00:21:35.586 { 00:21:35.586 "dma_device_id": "system", 00:21:35.586 "dma_device_type": 1 00:21:35.586 }, 00:21:35.586 { 00:21:35.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.586 "dma_device_type": 2 00:21:35.586 } 00:21:35.586 ], 00:21:35.586 "driver_specific": {} 00:21:35.586 } 00:21:35.586 ] 00:21:35.586 13:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:35.586 13:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:35.586 13:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:35.586 13:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:35.586 13:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:35.586 13:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:35.586 13:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:35.586 13:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:35.586 13:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:35.586 13:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:35.586 13:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:35.586 13:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:35.586 13:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:35.586 13:21:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.586 13:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.844 13:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:35.844 "name": "Existed_Raid", 00:21:35.844 "uuid": "03905259-538a-4990-9a77-ac7d4fe341fe", 00:21:35.844 "strip_size_kb": 0, 00:21:35.845 "state": "configuring", 00:21:35.845 "raid_level": "raid1", 00:21:35.845 "superblock": true, 00:21:35.845 "num_base_bdevs": 4, 00:21:35.845 "num_base_bdevs_discovered": 2, 00:21:35.845 "num_base_bdevs_operational": 4, 00:21:35.845 "base_bdevs_list": [ 00:21:35.845 { 00:21:35.845 "name": "BaseBdev1", 00:21:35.845 "uuid": "e9243e6e-b09e-4cd2-b48a-613d63691952", 00:21:35.845 "is_configured": true, 00:21:35.845 "data_offset": 2048, 00:21:35.845 "data_size": 63488 00:21:35.845 }, 00:21:35.845 { 00:21:35.845 "name": "BaseBdev2", 00:21:35.845 "uuid": "7edf7934-c5f7-462d-9a98-e89ccd656dc3", 00:21:35.845 "is_configured": true, 00:21:35.845 "data_offset": 2048, 00:21:35.845 "data_size": 63488 00:21:35.845 }, 00:21:35.845 { 00:21:35.845 "name": "BaseBdev3", 00:21:35.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.845 "is_configured": false, 00:21:35.845 "data_offset": 0, 00:21:35.845 "data_size": 0 00:21:35.845 }, 00:21:35.845 { 00:21:35.845 "name": "BaseBdev4", 00:21:35.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.845 "is_configured": false, 00:21:35.845 "data_offset": 0, 00:21:35.845 "data_size": 0 00:21:35.845 } 00:21:35.845 ] 00:21:35.845 }' 00:21:35.845 13:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:35.845 13:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.412 13:21:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:36.671 [2024-07-25 13:21:47.045893] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:36.671 BaseBdev3 00:21:36.671 13:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:36.671 13:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:36.671 13:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:36.671 13:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:36.671 13:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:36.671 13:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:36.671 13:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:36.929 13:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:37.189 [ 00:21:37.189 { 00:21:37.189 "name": "BaseBdev3", 00:21:37.189 "aliases": [ 00:21:37.189 "f4ae11a5-6124-46c1-8230-529fae127185" 00:21:37.189 ], 00:21:37.189 "product_name": "Malloc disk", 00:21:37.189 "block_size": 512, 00:21:37.189 "num_blocks": 65536, 00:21:37.189 "uuid": "f4ae11a5-6124-46c1-8230-529fae127185", 00:21:37.189 "assigned_rate_limits": { 00:21:37.189 "rw_ios_per_sec": 0, 00:21:37.189 "rw_mbytes_per_sec": 0, 00:21:37.189 "r_mbytes_per_sec": 0, 00:21:37.189 "w_mbytes_per_sec": 0 00:21:37.189 }, 
00:21:37.189 "claimed": true, 00:21:37.189 "claim_type": "exclusive_write", 00:21:37.189 "zoned": false, 00:21:37.189 "supported_io_types": { 00:21:37.189 "read": true, 00:21:37.189 "write": true, 00:21:37.189 "unmap": true, 00:21:37.189 "flush": true, 00:21:37.189 "reset": true, 00:21:37.189 "nvme_admin": false, 00:21:37.189 "nvme_io": false, 00:21:37.189 "nvme_io_md": false, 00:21:37.189 "write_zeroes": true, 00:21:37.189 "zcopy": true, 00:21:37.189 "get_zone_info": false, 00:21:37.189 "zone_management": false, 00:21:37.189 "zone_append": false, 00:21:37.189 "compare": false, 00:21:37.189 "compare_and_write": false, 00:21:37.189 "abort": true, 00:21:37.189 "seek_hole": false, 00:21:37.189 "seek_data": false, 00:21:37.189 "copy": true, 00:21:37.189 "nvme_iov_md": false 00:21:37.189 }, 00:21:37.189 "memory_domains": [ 00:21:37.189 { 00:21:37.189 "dma_device_id": "system", 00:21:37.189 "dma_device_type": 1 00:21:37.189 }, 00:21:37.189 { 00:21:37.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.189 "dma_device_type": 2 00:21:37.189 } 00:21:37.189 ], 00:21:37.189 "driver_specific": {} 00:21:37.189 } 00:21:37.189 ] 00:21:37.189 13:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:37.189 13:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:37.189 13:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:37.189 13:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:37.189 13:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:37.189 13:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:37.189 13:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:37.189 13:21:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:37.189 13:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:37.189 13:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:37.189 13:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:37.189 13:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:37.189 13:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:37.189 13:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.189 13:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.448 13:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:37.448 "name": "Existed_Raid", 00:21:37.448 "uuid": "03905259-538a-4990-9a77-ac7d4fe341fe", 00:21:37.448 "strip_size_kb": 0, 00:21:37.448 "state": "configuring", 00:21:37.448 "raid_level": "raid1", 00:21:37.448 "superblock": true, 00:21:37.448 "num_base_bdevs": 4, 00:21:37.448 "num_base_bdevs_discovered": 3, 00:21:37.448 "num_base_bdevs_operational": 4, 00:21:37.448 "base_bdevs_list": [ 00:21:37.448 { 00:21:37.448 "name": "BaseBdev1", 00:21:37.448 "uuid": "e9243e6e-b09e-4cd2-b48a-613d63691952", 00:21:37.448 "is_configured": true, 00:21:37.448 "data_offset": 2048, 00:21:37.448 "data_size": 63488 00:21:37.448 }, 00:21:37.448 { 00:21:37.448 "name": "BaseBdev2", 00:21:37.448 "uuid": "7edf7934-c5f7-462d-9a98-e89ccd656dc3", 00:21:37.448 "is_configured": true, 00:21:37.448 "data_offset": 2048, 00:21:37.448 "data_size": 63488 00:21:37.448 }, 00:21:37.448 { 00:21:37.448 "name": 
"BaseBdev3", 00:21:37.448 "uuid": "f4ae11a5-6124-46c1-8230-529fae127185", 00:21:37.448 "is_configured": true, 00:21:37.448 "data_offset": 2048, 00:21:37.448 "data_size": 63488 00:21:37.448 }, 00:21:37.448 { 00:21:37.448 "name": "BaseBdev4", 00:21:37.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.448 "is_configured": false, 00:21:37.448 "data_offset": 0, 00:21:37.448 "data_size": 0 00:21:37.448 } 00:21:37.448 ] 00:21:37.448 }' 00:21:37.448 13:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:37.448 13:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.015 13:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:38.274 [2024-07-25 13:21:48.548963] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:38.274 [2024-07-25 13:21:48.549119] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1bc8840 00:21:38.274 [2024-07-25 13:21:48.549131] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:38.274 [2024-07-25 13:21:48.549303] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bc8480 00:21:38.274 [2024-07-25 13:21:48.549423] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1bc8840 00:21:38.274 [2024-07-25 13:21:48.549432] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1bc8840 00:21:38.274 [2024-07-25 13:21:48.549517] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.274 BaseBdev4 00:21:38.274 13:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:38.274 13:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev4 00:21:38.274 13:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:38.274 13:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:38.274 13:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:38.274 13:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:38.274 13:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:38.533 13:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:38.533 [ 00:21:38.533 { 00:21:38.533 "name": "BaseBdev4", 00:21:38.533 "aliases": [ 00:21:38.533 "8567dc90-7b3d-42a0-a5a4-86187d47fdaa" 00:21:38.533 ], 00:21:38.533 "product_name": "Malloc disk", 00:21:38.533 "block_size": 512, 00:21:38.533 "num_blocks": 65536, 00:21:38.533 "uuid": "8567dc90-7b3d-42a0-a5a4-86187d47fdaa", 00:21:38.533 "assigned_rate_limits": { 00:21:38.533 "rw_ios_per_sec": 0, 00:21:38.533 "rw_mbytes_per_sec": 0, 00:21:38.533 "r_mbytes_per_sec": 0, 00:21:38.533 "w_mbytes_per_sec": 0 00:21:38.533 }, 00:21:38.533 "claimed": true, 00:21:38.533 "claim_type": "exclusive_write", 00:21:38.533 "zoned": false, 00:21:38.533 "supported_io_types": { 00:21:38.533 "read": true, 00:21:38.533 "write": true, 00:21:38.533 "unmap": true, 00:21:38.533 "flush": true, 00:21:38.533 "reset": true, 00:21:38.533 "nvme_admin": false, 00:21:38.533 "nvme_io": false, 00:21:38.533 "nvme_io_md": false, 00:21:38.533 "write_zeroes": true, 00:21:38.533 "zcopy": true, 00:21:38.533 "get_zone_info": false, 00:21:38.533 "zone_management": false, 00:21:38.533 "zone_append": false, 00:21:38.533 
"compare": false, 00:21:38.533 "compare_and_write": false, 00:21:38.533 "abort": true, 00:21:38.533 "seek_hole": false, 00:21:38.533 "seek_data": false, 00:21:38.533 "copy": true, 00:21:38.533 "nvme_iov_md": false 00:21:38.533 }, 00:21:38.533 "memory_domains": [ 00:21:38.533 { 00:21:38.533 "dma_device_id": "system", 00:21:38.533 "dma_device_type": 1 00:21:38.533 }, 00:21:38.533 { 00:21:38.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.533 "dma_device_type": 2 00:21:38.533 } 00:21:38.533 ], 00:21:38.533 "driver_specific": {} 00:21:38.533 } 00:21:38.533 ] 00:21:38.533 13:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:38.533 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:38.533 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:38.533 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:21:38.533 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:38.533 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:38.533 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:38.533 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:38.533 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:38.533 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:38.533 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.533 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.533 13:21:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.533 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.533 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.793 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.793 "name": "Existed_Raid", 00:21:38.793 "uuid": "03905259-538a-4990-9a77-ac7d4fe341fe", 00:21:38.793 "strip_size_kb": 0, 00:21:38.793 "state": "online", 00:21:38.793 "raid_level": "raid1", 00:21:38.793 "superblock": true, 00:21:38.793 "num_base_bdevs": 4, 00:21:38.793 "num_base_bdevs_discovered": 4, 00:21:38.793 "num_base_bdevs_operational": 4, 00:21:38.793 "base_bdevs_list": [ 00:21:38.793 { 00:21:38.793 "name": "BaseBdev1", 00:21:38.793 "uuid": "e9243e6e-b09e-4cd2-b48a-613d63691952", 00:21:38.793 "is_configured": true, 00:21:38.793 "data_offset": 2048, 00:21:38.793 "data_size": 63488 00:21:38.793 }, 00:21:38.793 { 00:21:38.793 "name": "BaseBdev2", 00:21:38.793 "uuid": "7edf7934-c5f7-462d-9a98-e89ccd656dc3", 00:21:38.793 "is_configured": true, 00:21:38.793 "data_offset": 2048, 00:21:38.793 "data_size": 63488 00:21:38.793 }, 00:21:38.793 { 00:21:38.793 "name": "BaseBdev3", 00:21:38.793 "uuid": "f4ae11a5-6124-46c1-8230-529fae127185", 00:21:38.793 "is_configured": true, 00:21:38.793 "data_offset": 2048, 00:21:38.793 "data_size": 63488 00:21:38.793 }, 00:21:38.793 { 00:21:38.793 "name": "BaseBdev4", 00:21:38.793 "uuid": "8567dc90-7b3d-42a0-a5a4-86187d47fdaa", 00:21:38.793 "is_configured": true, 00:21:38.793 "data_offset": 2048, 00:21:38.793 "data_size": 63488 00:21:38.793 } 00:21:38.793 ] 00:21:38.793 }' 00:21:38.793 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.793 13:21:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.362 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:39.362 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:39.362 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:39.362 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:39.362 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:39.362 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:39.362 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:39.362 13:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:39.624 [2024-07-25 13:21:50.029385] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.624 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:39.624 "name": "Existed_Raid", 00:21:39.624 "aliases": [ 00:21:39.624 "03905259-538a-4990-9a77-ac7d4fe341fe" 00:21:39.624 ], 00:21:39.624 "product_name": "Raid Volume", 00:21:39.624 "block_size": 512, 00:21:39.624 "num_blocks": 63488, 00:21:39.624 "uuid": "03905259-538a-4990-9a77-ac7d4fe341fe", 00:21:39.624 "assigned_rate_limits": { 00:21:39.624 "rw_ios_per_sec": 0, 00:21:39.624 "rw_mbytes_per_sec": 0, 00:21:39.624 "r_mbytes_per_sec": 0, 00:21:39.624 "w_mbytes_per_sec": 0 00:21:39.624 }, 00:21:39.624 "claimed": false, 00:21:39.624 "zoned": false, 00:21:39.624 "supported_io_types": { 00:21:39.624 "read": true, 00:21:39.624 "write": true, 00:21:39.624 "unmap": false, 
00:21:39.624 "flush": false, 00:21:39.624 "reset": true, 00:21:39.624 "nvme_admin": false, 00:21:39.624 "nvme_io": false, 00:21:39.624 "nvme_io_md": false, 00:21:39.624 "write_zeroes": true, 00:21:39.624 "zcopy": false, 00:21:39.624 "get_zone_info": false, 00:21:39.624 "zone_management": false, 00:21:39.624 "zone_append": false, 00:21:39.624 "compare": false, 00:21:39.624 "compare_and_write": false, 00:21:39.624 "abort": false, 00:21:39.624 "seek_hole": false, 00:21:39.624 "seek_data": false, 00:21:39.624 "copy": false, 00:21:39.624 "nvme_iov_md": false 00:21:39.624 }, 00:21:39.624 "memory_domains": [ 00:21:39.624 { 00:21:39.624 "dma_device_id": "system", 00:21:39.624 "dma_device_type": 1 00:21:39.624 }, 00:21:39.624 { 00:21:39.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.624 "dma_device_type": 2 00:21:39.624 }, 00:21:39.624 { 00:21:39.624 "dma_device_id": "system", 00:21:39.624 "dma_device_type": 1 00:21:39.624 }, 00:21:39.624 { 00:21:39.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.624 "dma_device_type": 2 00:21:39.624 }, 00:21:39.624 { 00:21:39.624 "dma_device_id": "system", 00:21:39.624 "dma_device_type": 1 00:21:39.624 }, 00:21:39.624 { 00:21:39.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.624 "dma_device_type": 2 00:21:39.624 }, 00:21:39.624 { 00:21:39.624 "dma_device_id": "system", 00:21:39.624 "dma_device_type": 1 00:21:39.624 }, 00:21:39.624 { 00:21:39.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.624 "dma_device_type": 2 00:21:39.624 } 00:21:39.624 ], 00:21:39.624 "driver_specific": { 00:21:39.624 "raid": { 00:21:39.624 "uuid": "03905259-538a-4990-9a77-ac7d4fe341fe", 00:21:39.624 "strip_size_kb": 0, 00:21:39.624 "state": "online", 00:21:39.624 "raid_level": "raid1", 00:21:39.624 "superblock": true, 00:21:39.624 "num_base_bdevs": 4, 00:21:39.624 "num_base_bdevs_discovered": 4, 00:21:39.624 "num_base_bdevs_operational": 4, 00:21:39.624 "base_bdevs_list": [ 00:21:39.624 { 00:21:39.624 "name": "BaseBdev1", 00:21:39.624 
"uuid": "e9243e6e-b09e-4cd2-b48a-613d63691952", 00:21:39.624 "is_configured": true, 00:21:39.624 "data_offset": 2048, 00:21:39.624 "data_size": 63488 00:21:39.624 }, 00:21:39.624 { 00:21:39.624 "name": "BaseBdev2", 00:21:39.624 "uuid": "7edf7934-c5f7-462d-9a98-e89ccd656dc3", 00:21:39.624 "is_configured": true, 00:21:39.624 "data_offset": 2048, 00:21:39.624 "data_size": 63488 00:21:39.624 }, 00:21:39.624 { 00:21:39.624 "name": "BaseBdev3", 00:21:39.624 "uuid": "f4ae11a5-6124-46c1-8230-529fae127185", 00:21:39.624 "is_configured": true, 00:21:39.624 "data_offset": 2048, 00:21:39.624 "data_size": 63488 00:21:39.624 }, 00:21:39.624 { 00:21:39.624 "name": "BaseBdev4", 00:21:39.624 "uuid": "8567dc90-7b3d-42a0-a5a4-86187d47fdaa", 00:21:39.624 "is_configured": true, 00:21:39.624 "data_offset": 2048, 00:21:39.624 "data_size": 63488 00:21:39.624 } 00:21:39.624 ] 00:21:39.624 } 00:21:39.624 } 00:21:39.624 }' 00:21:39.624 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:39.624 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:39.624 BaseBdev2 00:21:39.624 BaseBdev3 00:21:39.624 BaseBdev4' 00:21:39.625 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:39.625 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:39.625 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:39.918 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:39.918 "name": "BaseBdev1", 00:21:39.918 "aliases": [ 00:21:39.918 "e9243e6e-b09e-4cd2-b48a-613d63691952" 00:21:39.918 ], 00:21:39.918 "product_name": "Malloc disk", 00:21:39.918 
"block_size": 512, 00:21:39.918 "num_blocks": 65536, 00:21:39.918 "uuid": "e9243e6e-b09e-4cd2-b48a-613d63691952", 00:21:39.918 "assigned_rate_limits": { 00:21:39.918 "rw_ios_per_sec": 0, 00:21:39.918 "rw_mbytes_per_sec": 0, 00:21:39.918 "r_mbytes_per_sec": 0, 00:21:39.918 "w_mbytes_per_sec": 0 00:21:39.918 }, 00:21:39.918 "claimed": true, 00:21:39.918 "claim_type": "exclusive_write", 00:21:39.918 "zoned": false, 00:21:39.918 "supported_io_types": { 00:21:39.918 "read": true, 00:21:39.918 "write": true, 00:21:39.918 "unmap": true, 00:21:39.918 "flush": true, 00:21:39.918 "reset": true, 00:21:39.918 "nvme_admin": false, 00:21:39.918 "nvme_io": false, 00:21:39.918 "nvme_io_md": false, 00:21:39.918 "write_zeroes": true, 00:21:39.918 "zcopy": true, 00:21:39.918 "get_zone_info": false, 00:21:39.918 "zone_management": false, 00:21:39.918 "zone_append": false, 00:21:39.918 "compare": false, 00:21:39.918 "compare_and_write": false, 00:21:39.918 "abort": true, 00:21:39.918 "seek_hole": false, 00:21:39.918 "seek_data": false, 00:21:39.918 "copy": true, 00:21:39.918 "nvme_iov_md": false 00:21:39.918 }, 00:21:39.918 "memory_domains": [ 00:21:39.918 { 00:21:39.918 "dma_device_id": "system", 00:21:39.918 "dma_device_type": 1 00:21:39.918 }, 00:21:39.918 { 00:21:39.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.918 "dma_device_type": 2 00:21:39.918 } 00:21:39.918 ], 00:21:39.918 "driver_specific": {} 00:21:39.918 }' 00:21:39.918 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:39.918 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:39.918 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:40.176 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:40.176 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:40.176 13:21:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:40.176 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:40.176 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:40.176 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:40.176 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:40.176 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:40.176 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:40.176 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:40.176 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:40.176 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:40.434 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:40.434 "name": "BaseBdev2", 00:21:40.434 "aliases": [ 00:21:40.434 "7edf7934-c5f7-462d-9a98-e89ccd656dc3" 00:21:40.434 ], 00:21:40.434 "product_name": "Malloc disk", 00:21:40.434 "block_size": 512, 00:21:40.434 "num_blocks": 65536, 00:21:40.434 "uuid": "7edf7934-c5f7-462d-9a98-e89ccd656dc3", 00:21:40.434 "assigned_rate_limits": { 00:21:40.434 "rw_ios_per_sec": 0, 00:21:40.434 "rw_mbytes_per_sec": 0, 00:21:40.434 "r_mbytes_per_sec": 0, 00:21:40.434 "w_mbytes_per_sec": 0 00:21:40.434 }, 00:21:40.434 "claimed": true, 00:21:40.434 "claim_type": "exclusive_write", 00:21:40.434 "zoned": false, 00:21:40.434 "supported_io_types": { 00:21:40.434 "read": true, 00:21:40.434 "write": true, 00:21:40.434 "unmap": true, 00:21:40.434 
"flush": true, 00:21:40.434 "reset": true, 00:21:40.434 "nvme_admin": false, 00:21:40.434 "nvme_io": false, 00:21:40.434 "nvme_io_md": false, 00:21:40.434 "write_zeroes": true, 00:21:40.434 "zcopy": true, 00:21:40.434 "get_zone_info": false, 00:21:40.434 "zone_management": false, 00:21:40.434 "zone_append": false, 00:21:40.434 "compare": false, 00:21:40.434 "compare_and_write": false, 00:21:40.434 "abort": true, 00:21:40.434 "seek_hole": false, 00:21:40.434 "seek_data": false, 00:21:40.434 "copy": true, 00:21:40.434 "nvme_iov_md": false 00:21:40.434 }, 00:21:40.434 "memory_domains": [ 00:21:40.434 { 00:21:40.434 "dma_device_id": "system", 00:21:40.434 "dma_device_type": 1 00:21:40.434 }, 00:21:40.434 { 00:21:40.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.434 "dma_device_type": 2 00:21:40.434 } 00:21:40.434 ], 00:21:40.434 "driver_specific": {} 00:21:40.434 }' 00:21:40.434 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:40.434 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:40.692 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:40.692 13:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:40.692 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:40.692 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:40.692 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:40.692 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:40.692 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:40.692 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:40.950 13:21:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:40.950 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:40.950 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:40.950 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:40.950 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:41.208 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:41.209 "name": "BaseBdev3", 00:21:41.209 "aliases": [ 00:21:41.209 "f4ae11a5-6124-46c1-8230-529fae127185" 00:21:41.209 ], 00:21:41.209 "product_name": "Malloc disk", 00:21:41.209 "block_size": 512, 00:21:41.209 "num_blocks": 65536, 00:21:41.209 "uuid": "f4ae11a5-6124-46c1-8230-529fae127185", 00:21:41.209 "assigned_rate_limits": { 00:21:41.209 "rw_ios_per_sec": 0, 00:21:41.209 "rw_mbytes_per_sec": 0, 00:21:41.209 "r_mbytes_per_sec": 0, 00:21:41.209 "w_mbytes_per_sec": 0 00:21:41.209 }, 00:21:41.209 "claimed": true, 00:21:41.209 "claim_type": "exclusive_write", 00:21:41.209 "zoned": false, 00:21:41.209 "supported_io_types": { 00:21:41.209 "read": true, 00:21:41.209 "write": true, 00:21:41.209 "unmap": true, 00:21:41.209 "flush": true, 00:21:41.209 "reset": true, 00:21:41.209 "nvme_admin": false, 00:21:41.209 "nvme_io": false, 00:21:41.209 "nvme_io_md": false, 00:21:41.209 "write_zeroes": true, 00:21:41.209 "zcopy": true, 00:21:41.209 "get_zone_info": false, 00:21:41.209 "zone_management": false, 00:21:41.209 "zone_append": false, 00:21:41.209 "compare": false, 00:21:41.209 "compare_and_write": false, 00:21:41.209 "abort": true, 00:21:41.209 "seek_hole": false, 00:21:41.209 "seek_data": false, 00:21:41.209 "copy": true, 00:21:41.209 "nvme_iov_md": 
false 00:21:41.209 }, 00:21:41.209 "memory_domains": [ 00:21:41.209 { 00:21:41.209 "dma_device_id": "system", 00:21:41.209 "dma_device_type": 1 00:21:41.209 }, 00:21:41.209 { 00:21:41.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.209 "dma_device_type": 2 00:21:41.209 } 00:21:41.209 ], 00:21:41.209 "driver_specific": {} 00:21:41.209 }' 00:21:41.209 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:41.209 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:41.209 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:41.209 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:41.209 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:41.209 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:41.209 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:41.209 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:41.467 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:41.467 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:41.467 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:41.467 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:41.467 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:41.467 13:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:41.467 13:21:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:41.725 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:41.725 "name": "BaseBdev4", 00:21:41.725 "aliases": [ 00:21:41.725 "8567dc90-7b3d-42a0-a5a4-86187d47fdaa" 00:21:41.725 ], 00:21:41.725 "product_name": "Malloc disk", 00:21:41.725 "block_size": 512, 00:21:41.725 "num_blocks": 65536, 00:21:41.725 "uuid": "8567dc90-7b3d-42a0-a5a4-86187d47fdaa", 00:21:41.725 "assigned_rate_limits": { 00:21:41.725 "rw_ios_per_sec": 0, 00:21:41.725 "rw_mbytes_per_sec": 0, 00:21:41.725 "r_mbytes_per_sec": 0, 00:21:41.725 "w_mbytes_per_sec": 0 00:21:41.725 }, 00:21:41.725 "claimed": true, 00:21:41.725 "claim_type": "exclusive_write", 00:21:41.725 "zoned": false, 00:21:41.725 "supported_io_types": { 00:21:41.725 "read": true, 00:21:41.725 "write": true, 00:21:41.725 "unmap": true, 00:21:41.725 "flush": true, 00:21:41.725 "reset": true, 00:21:41.725 "nvme_admin": false, 00:21:41.725 "nvme_io": false, 00:21:41.725 "nvme_io_md": false, 00:21:41.725 "write_zeroes": true, 00:21:41.725 "zcopy": true, 00:21:41.725 "get_zone_info": false, 00:21:41.725 "zone_management": false, 00:21:41.725 "zone_append": false, 00:21:41.725 "compare": false, 00:21:41.725 "compare_and_write": false, 00:21:41.725 "abort": true, 00:21:41.725 "seek_hole": false, 00:21:41.725 "seek_data": false, 00:21:41.725 "copy": true, 00:21:41.725 "nvme_iov_md": false 00:21:41.725 }, 00:21:41.725 "memory_domains": [ 00:21:41.725 { 00:21:41.725 "dma_device_id": "system", 00:21:41.725 "dma_device_type": 1 00:21:41.725 }, 00:21:41.725 { 00:21:41.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.725 "dma_device_type": 2 00:21:41.725 } 00:21:41.725 ], 00:21:41.725 "driver_specific": {} 00:21:41.725 }' 00:21:41.725 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:41.725 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:21:41.725 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:41.725 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:41.725 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:41.725 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:41.725 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:41.982 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:41.982 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:41.982 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:41.982 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:41.982 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:41.982 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:42.241 [2024-07-25 13:21:52.567821] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.241 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.499 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:42.499 "name": "Existed_Raid", 00:21:42.499 "uuid": "03905259-538a-4990-9a77-ac7d4fe341fe", 00:21:42.499 "strip_size_kb": 0, 00:21:42.499 "state": "online", 00:21:42.499 "raid_level": "raid1", 00:21:42.499 "superblock": true, 00:21:42.499 "num_base_bdevs": 4, 00:21:42.499 "num_base_bdevs_discovered": 3, 00:21:42.499 "num_base_bdevs_operational": 3, 00:21:42.499 "base_bdevs_list": [ 00:21:42.499 { 00:21:42.499 "name": null, 
00:21:42.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.499 "is_configured": false, 00:21:42.499 "data_offset": 2048, 00:21:42.499 "data_size": 63488 00:21:42.499 }, 00:21:42.499 { 00:21:42.499 "name": "BaseBdev2", 00:21:42.499 "uuid": "7edf7934-c5f7-462d-9a98-e89ccd656dc3", 00:21:42.499 "is_configured": true, 00:21:42.499 "data_offset": 2048, 00:21:42.500 "data_size": 63488 00:21:42.500 }, 00:21:42.500 { 00:21:42.500 "name": "BaseBdev3", 00:21:42.500 "uuid": "f4ae11a5-6124-46c1-8230-529fae127185", 00:21:42.500 "is_configured": true, 00:21:42.500 "data_offset": 2048, 00:21:42.500 "data_size": 63488 00:21:42.500 }, 00:21:42.500 { 00:21:42.500 "name": "BaseBdev4", 00:21:42.500 "uuid": "8567dc90-7b3d-42a0-a5a4-86187d47fdaa", 00:21:42.500 "is_configured": true, 00:21:42.500 "data_offset": 2048, 00:21:42.500 "data_size": 63488 00:21:42.500 } 00:21:42.500 ] 00:21:42.500 }' 00:21:42.500 13:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:42.500 13:21:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.065 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:43.065 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:43.065 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:43.065 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.323 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:43.323 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:43.323 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:43.581 [2024-07-25 13:21:53.820217] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:43.581 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:43.581 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:43.581 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.581 13:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:43.839 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:43.839 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:43.839 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:43.839 [2024-07-25 13:21:54.283525] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:43.839 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:43.839 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:43.839 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.839 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:44.097 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:44.097 13:21:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:44.097 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:44.356 [2024-07-25 13:21:54.738699] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:44.356 [2024-07-25 13:21:54.738773] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:44.356 [2024-07-25 13:21:54.748773] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:44.356 [2024-07-25 13:21:54.748800] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:44.356 [2024-07-25 13:21:54.748810] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bc8840 name Existed_Raid, state offline 00:21:44.356 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:44.356 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:44.356 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.356 13:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:44.614 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:44.614 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:44.614 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:44.614 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:44.614 13:21:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:44.614 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:44.873 BaseBdev2 00:21:44.873 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:44.873 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:44.873 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:44.873 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:44.873 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:44.873 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:44.873 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:45.132 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:45.390 [ 00:21:45.390 { 00:21:45.390 "name": "BaseBdev2", 00:21:45.390 "aliases": [ 00:21:45.390 "ea489f39-a95f-4283-8795-069ee4764a92" 00:21:45.390 ], 00:21:45.390 "product_name": "Malloc disk", 00:21:45.390 "block_size": 512, 00:21:45.391 "num_blocks": 65536, 00:21:45.391 "uuid": "ea489f39-a95f-4283-8795-069ee4764a92", 00:21:45.391 "assigned_rate_limits": { 00:21:45.391 "rw_ios_per_sec": 0, 00:21:45.391 "rw_mbytes_per_sec": 0, 00:21:45.391 "r_mbytes_per_sec": 0, 00:21:45.391 "w_mbytes_per_sec": 0 00:21:45.391 }, 00:21:45.391 "claimed": false, 00:21:45.391 "zoned": false, 
00:21:45.391 "supported_io_types": { 00:21:45.391 "read": true, 00:21:45.391 "write": true, 00:21:45.391 "unmap": true, 00:21:45.391 "flush": true, 00:21:45.391 "reset": true, 00:21:45.391 "nvme_admin": false, 00:21:45.391 "nvme_io": false, 00:21:45.391 "nvme_io_md": false, 00:21:45.391 "write_zeroes": true, 00:21:45.391 "zcopy": true, 00:21:45.391 "get_zone_info": false, 00:21:45.391 "zone_management": false, 00:21:45.391 "zone_append": false, 00:21:45.391 "compare": false, 00:21:45.391 "compare_and_write": false, 00:21:45.391 "abort": true, 00:21:45.391 "seek_hole": false, 00:21:45.391 "seek_data": false, 00:21:45.391 "copy": true, 00:21:45.391 "nvme_iov_md": false 00:21:45.391 }, 00:21:45.391 "memory_domains": [ 00:21:45.391 { 00:21:45.391 "dma_device_id": "system", 00:21:45.391 "dma_device_type": 1 00:21:45.391 }, 00:21:45.391 { 00:21:45.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.391 "dma_device_type": 2 00:21:45.391 } 00:21:45.391 ], 00:21:45.391 "driver_specific": {} 00:21:45.391 } 00:21:45.391 ] 00:21:45.391 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:45.391 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:45.391 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:45.391 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:45.649 BaseBdev3 00:21:45.649 13:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:45.649 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:45.649 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:45.649 13:21:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:45.649 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:45.649 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:45.649 13:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:45.649 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:45.908 [ 00:21:45.908 { 00:21:45.908 "name": "BaseBdev3", 00:21:45.908 "aliases": [ 00:21:45.908 "cfe54427-07dd-4eb0-b3be-214884b56d88" 00:21:45.908 ], 00:21:45.908 "product_name": "Malloc disk", 00:21:45.908 "block_size": 512, 00:21:45.908 "num_blocks": 65536, 00:21:45.908 "uuid": "cfe54427-07dd-4eb0-b3be-214884b56d88", 00:21:45.908 "assigned_rate_limits": { 00:21:45.908 "rw_ios_per_sec": 0, 00:21:45.908 "rw_mbytes_per_sec": 0, 00:21:45.908 "r_mbytes_per_sec": 0, 00:21:45.908 "w_mbytes_per_sec": 0 00:21:45.908 }, 00:21:45.908 "claimed": false, 00:21:45.908 "zoned": false, 00:21:45.908 "supported_io_types": { 00:21:45.908 "read": true, 00:21:45.908 "write": true, 00:21:45.908 "unmap": true, 00:21:45.908 "flush": true, 00:21:45.908 "reset": true, 00:21:45.908 "nvme_admin": false, 00:21:45.908 "nvme_io": false, 00:21:45.908 "nvme_io_md": false, 00:21:45.908 "write_zeroes": true, 00:21:45.908 "zcopy": true, 00:21:45.908 "get_zone_info": false, 00:21:45.908 "zone_management": false, 00:21:45.908 "zone_append": false, 00:21:45.908 "compare": false, 00:21:45.908 "compare_and_write": false, 00:21:45.908 "abort": true, 00:21:45.908 "seek_hole": false, 00:21:45.908 "seek_data": false, 00:21:45.908 "copy": true, 00:21:45.908 "nvme_iov_md": 
false 00:21:45.908 }, 00:21:45.908 "memory_domains": [ 00:21:45.909 { 00:21:45.909 "dma_device_id": "system", 00:21:45.909 "dma_device_type": 1 00:21:45.909 }, 00:21:45.909 { 00:21:45.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.909 "dma_device_type": 2 00:21:45.909 } 00:21:45.909 ], 00:21:45.909 "driver_specific": {} 00:21:45.909 } 00:21:45.909 ] 00:21:45.909 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:45.909 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:45.909 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:45.909 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:46.167 BaseBdev4 00:21:46.167 13:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:46.167 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:21:46.167 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:46.167 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:46.167 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:46.167 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:46.167 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:46.426 13:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:46.685 [ 00:21:46.685 { 00:21:46.685 "name": "BaseBdev4", 00:21:46.685 "aliases": [ 00:21:46.685 "a8a99729-9048-4ee2-b09c-adfbbdd2d44d" 00:21:46.685 ], 00:21:46.685 "product_name": "Malloc disk", 00:21:46.685 "block_size": 512, 00:21:46.685 "num_blocks": 65536, 00:21:46.685 "uuid": "a8a99729-9048-4ee2-b09c-adfbbdd2d44d", 00:21:46.685 "assigned_rate_limits": { 00:21:46.685 "rw_ios_per_sec": 0, 00:21:46.685 "rw_mbytes_per_sec": 0, 00:21:46.685 "r_mbytes_per_sec": 0, 00:21:46.685 "w_mbytes_per_sec": 0 00:21:46.685 }, 00:21:46.685 "claimed": false, 00:21:46.685 "zoned": false, 00:21:46.685 "supported_io_types": { 00:21:46.685 "read": true, 00:21:46.685 "write": true, 00:21:46.685 "unmap": true, 00:21:46.685 "flush": true, 00:21:46.685 "reset": true, 00:21:46.685 "nvme_admin": false, 00:21:46.685 "nvme_io": false, 00:21:46.685 "nvme_io_md": false, 00:21:46.685 "write_zeroes": true, 00:21:46.685 "zcopy": true, 00:21:46.685 "get_zone_info": false, 00:21:46.685 "zone_management": false, 00:21:46.685 "zone_append": false, 00:21:46.685 "compare": false, 00:21:46.685 "compare_and_write": false, 00:21:46.685 "abort": true, 00:21:46.685 "seek_hole": false, 00:21:46.685 "seek_data": false, 00:21:46.685 "copy": true, 00:21:46.685 "nvme_iov_md": false 00:21:46.685 }, 00:21:46.685 "memory_domains": [ 00:21:46.685 { 00:21:46.685 "dma_device_id": "system", 00:21:46.685 "dma_device_type": 1 00:21:46.685 }, 00:21:46.685 { 00:21:46.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.685 "dma_device_type": 2 00:21:46.685 } 00:21:46.685 ], 00:21:46.685 "driver_specific": {} 00:21:46.685 } 00:21:46.685 ] 00:21:46.685 13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:46.685 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:46.685 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
00:21:46.685 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:46.944 [2024-07-25 13:21:57.235844] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:46.944 [2024-07-25 13:21:57.235880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:46.944 [2024-07-25 13:21:57.235896] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:46.944 [2024-07-25 13:21:57.237110] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:46.944 [2024-07-25 13:21:57.237156] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:46.944 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:46.944 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:46.944 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:46.944 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:46.944 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:46.944 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:46.944 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:46.944 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:46.944 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:21:46.944 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:46.944 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.944 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:47.203 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:47.203 "name": "Existed_Raid", 00:21:47.203 "uuid": "cd17127b-00f2-4b05-9958-81858189c652", 00:21:47.203 "strip_size_kb": 0, 00:21:47.203 "state": "configuring", 00:21:47.203 "raid_level": "raid1", 00:21:47.203 "superblock": true, 00:21:47.203 "num_base_bdevs": 4, 00:21:47.203 "num_base_bdevs_discovered": 3, 00:21:47.203 "num_base_bdevs_operational": 4, 00:21:47.203 "base_bdevs_list": [ 00:21:47.203 { 00:21:47.203 "name": "BaseBdev1", 00:21:47.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.203 "is_configured": false, 00:21:47.203 "data_offset": 0, 00:21:47.203 "data_size": 0 00:21:47.203 }, 00:21:47.203 { 00:21:47.203 "name": "BaseBdev2", 00:21:47.203 "uuid": "ea489f39-a95f-4283-8795-069ee4764a92", 00:21:47.203 "is_configured": true, 00:21:47.203 "data_offset": 2048, 00:21:47.203 "data_size": 63488 00:21:47.203 }, 00:21:47.203 { 00:21:47.203 "name": "BaseBdev3", 00:21:47.203 "uuid": "cfe54427-07dd-4eb0-b3be-214884b56d88", 00:21:47.203 "is_configured": true, 00:21:47.203 "data_offset": 2048, 00:21:47.203 "data_size": 63488 00:21:47.203 }, 00:21:47.203 { 00:21:47.203 "name": "BaseBdev4", 00:21:47.203 "uuid": "a8a99729-9048-4ee2-b09c-adfbbdd2d44d", 00:21:47.203 "is_configured": true, 00:21:47.203 "data_offset": 2048, 00:21:47.203 "data_size": 63488 00:21:47.203 } 00:21:47.203 ] 00:21:47.203 }' 00:21:47.203 13:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:47.203 
13:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.771 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:48.030 [2024-07-25 13:21:58.262542] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:48.030 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:48.030 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:48.030 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:48.030 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:48.030 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:48.030 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:48.030 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:48.030 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:48.030 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:48.030 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:48.030 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.030 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.030 13:21:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:48.030 "name": "Existed_Raid", 00:21:48.030 "uuid": "cd17127b-00f2-4b05-9958-81858189c652", 00:21:48.030 "strip_size_kb": 0, 00:21:48.030 "state": "configuring", 00:21:48.030 "raid_level": "raid1", 00:21:48.030 "superblock": true, 00:21:48.030 "num_base_bdevs": 4, 00:21:48.030 "num_base_bdevs_discovered": 2, 00:21:48.030 "num_base_bdevs_operational": 4, 00:21:48.030 "base_bdevs_list": [ 00:21:48.030 { 00:21:48.030 "name": "BaseBdev1", 00:21:48.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.030 "is_configured": false, 00:21:48.030 "data_offset": 0, 00:21:48.030 "data_size": 0 00:21:48.030 }, 00:21:48.030 { 00:21:48.030 "name": null, 00:21:48.030 "uuid": "ea489f39-a95f-4283-8795-069ee4764a92", 00:21:48.030 "is_configured": false, 00:21:48.030 "data_offset": 2048, 00:21:48.030 "data_size": 63488 00:21:48.030 }, 00:21:48.030 { 00:21:48.030 "name": "BaseBdev3", 00:21:48.030 "uuid": "cfe54427-07dd-4eb0-b3be-214884b56d88", 00:21:48.030 "is_configured": true, 00:21:48.030 "data_offset": 2048, 00:21:48.030 "data_size": 63488 00:21:48.030 }, 00:21:48.030 { 00:21:48.030 "name": "BaseBdev4", 00:21:48.030 "uuid": "a8a99729-9048-4ee2-b09c-adfbbdd2d44d", 00:21:48.030 "is_configured": true, 00:21:48.030 "data_offset": 2048, 00:21:48.030 "data_size": 63488 00:21:48.030 } 00:21:48.030 ] 00:21:48.030 }' 00:21:48.030 13:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:48.030 13:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.598 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.598 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:48.857 13:21:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:48.857 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:49.116 [2024-07-25 13:21:59.492874] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:49.116 BaseBdev1 00:21:49.116 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:49.116 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:49.116 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:49.116 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:49.116 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:49.116 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:49.116 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:49.375 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:49.632 [ 00:21:49.632 { 00:21:49.632 "name": "BaseBdev1", 00:21:49.632 "aliases": [ 00:21:49.632 "5b164641-b110-4f0a-88d5-cb09691549f7" 00:21:49.632 ], 00:21:49.632 "product_name": "Malloc disk", 00:21:49.632 "block_size": 512, 00:21:49.632 "num_blocks": 65536, 00:21:49.632 "uuid": "5b164641-b110-4f0a-88d5-cb09691549f7", 00:21:49.632 "assigned_rate_limits": { 00:21:49.632 "rw_ios_per_sec": 0, 00:21:49.632 
"rw_mbytes_per_sec": 0, 00:21:49.632 "r_mbytes_per_sec": 0, 00:21:49.632 "w_mbytes_per_sec": 0 00:21:49.632 }, 00:21:49.632 "claimed": true, 00:21:49.632 "claim_type": "exclusive_write", 00:21:49.632 "zoned": false, 00:21:49.632 "supported_io_types": { 00:21:49.632 "read": true, 00:21:49.632 "write": true, 00:21:49.632 "unmap": true, 00:21:49.632 "flush": true, 00:21:49.632 "reset": true, 00:21:49.632 "nvme_admin": false, 00:21:49.632 "nvme_io": false, 00:21:49.632 "nvme_io_md": false, 00:21:49.632 "write_zeroes": true, 00:21:49.632 "zcopy": true, 00:21:49.632 "get_zone_info": false, 00:21:49.632 "zone_management": false, 00:21:49.632 "zone_append": false, 00:21:49.632 "compare": false, 00:21:49.632 "compare_and_write": false, 00:21:49.632 "abort": true, 00:21:49.632 "seek_hole": false, 00:21:49.632 "seek_data": false, 00:21:49.632 "copy": true, 00:21:49.632 "nvme_iov_md": false 00:21:49.632 }, 00:21:49.632 "memory_domains": [ 00:21:49.632 { 00:21:49.632 "dma_device_id": "system", 00:21:49.632 "dma_device_type": 1 00:21:49.632 }, 00:21:49.632 { 00:21:49.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.632 "dma_device_type": 2 00:21:49.632 } 00:21:49.632 ], 00:21:49.632 "driver_specific": {} 00:21:49.632 } 00:21:49.632 ] 00:21:49.632 13:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:49.632 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:49.632 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:49.632 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:49.632 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:49.632 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:49.632 13:21:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:49.632 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:49.632 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:49.632 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:49.632 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:49.632 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.632 13:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.890 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:49.890 "name": "Existed_Raid", 00:21:49.890 "uuid": "cd17127b-00f2-4b05-9958-81858189c652", 00:21:49.890 "strip_size_kb": 0, 00:21:49.890 "state": "configuring", 00:21:49.890 "raid_level": "raid1", 00:21:49.890 "superblock": true, 00:21:49.890 "num_base_bdevs": 4, 00:21:49.890 "num_base_bdevs_discovered": 3, 00:21:49.890 "num_base_bdevs_operational": 4, 00:21:49.890 "base_bdevs_list": [ 00:21:49.890 { 00:21:49.890 "name": "BaseBdev1", 00:21:49.890 "uuid": "5b164641-b110-4f0a-88d5-cb09691549f7", 00:21:49.890 "is_configured": true, 00:21:49.890 "data_offset": 2048, 00:21:49.890 "data_size": 63488 00:21:49.890 }, 00:21:49.890 { 00:21:49.890 "name": null, 00:21:49.890 "uuid": "ea489f39-a95f-4283-8795-069ee4764a92", 00:21:49.890 "is_configured": false, 00:21:49.890 "data_offset": 2048, 00:21:49.890 "data_size": 63488 00:21:49.890 }, 00:21:49.890 { 00:21:49.890 "name": "BaseBdev3", 00:21:49.890 "uuid": "cfe54427-07dd-4eb0-b3be-214884b56d88", 00:21:49.890 "is_configured": true, 00:21:49.890 
"data_offset": 2048, 00:21:49.890 "data_size": 63488 00:21:49.890 }, 00:21:49.890 { 00:21:49.890 "name": "BaseBdev4", 00:21:49.890 "uuid": "a8a99729-9048-4ee2-b09c-adfbbdd2d44d", 00:21:49.890 "is_configured": true, 00:21:49.890 "data_offset": 2048, 00:21:49.890 "data_size": 63488 00:21:49.890 } 00:21:49.890 ] 00:21:49.890 }' 00:21:49.890 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:49.890 13:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.456 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.456 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:50.714 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:50.714 13:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:50.714 [2024-07-25 13:22:01.181471] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:50.714 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:50.714 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:50.714 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:50.714 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:50.714 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:50.714 13:22:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:50.714 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:50.714 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:50.714 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:50.714 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:50.972 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.972 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.972 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:50.972 "name": "Existed_Raid", 00:21:50.972 "uuid": "cd17127b-00f2-4b05-9958-81858189c652", 00:21:50.972 "strip_size_kb": 0, 00:21:50.972 "state": "configuring", 00:21:50.972 "raid_level": "raid1", 00:21:50.972 "superblock": true, 00:21:50.972 "num_base_bdevs": 4, 00:21:50.972 "num_base_bdevs_discovered": 2, 00:21:50.972 "num_base_bdevs_operational": 4, 00:21:50.972 "base_bdevs_list": [ 00:21:50.972 { 00:21:50.972 "name": "BaseBdev1", 00:21:50.972 "uuid": "5b164641-b110-4f0a-88d5-cb09691549f7", 00:21:50.972 "is_configured": true, 00:21:50.972 "data_offset": 2048, 00:21:50.972 "data_size": 63488 00:21:50.972 }, 00:21:50.972 { 00:21:50.972 "name": null, 00:21:50.972 "uuid": "ea489f39-a95f-4283-8795-069ee4764a92", 00:21:50.972 "is_configured": false, 00:21:50.972 "data_offset": 2048, 00:21:50.972 "data_size": 63488 00:21:50.972 }, 00:21:50.972 { 00:21:50.972 "name": null, 00:21:50.972 "uuid": "cfe54427-07dd-4eb0-b3be-214884b56d88", 00:21:50.972 "is_configured": false, 00:21:50.972 "data_offset": 2048, 00:21:50.972 "data_size": 
63488 00:21:50.972 }, 00:21:50.972 { 00:21:50.972 "name": "BaseBdev4", 00:21:50.972 "uuid": "a8a99729-9048-4ee2-b09c-adfbbdd2d44d", 00:21:50.972 "is_configured": true, 00:21:50.972 "data_offset": 2048, 00:21:50.972 "data_size": 63488 00:21:50.972 } 00:21:50.972 ] 00:21:50.973 }' 00:21:50.973 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:50.973 13:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.540 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:51.540 13:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.799 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:51.799 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:52.060 [2024-07-25 13:22:02.364610] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:52.060 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:52.060 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:52.060 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:52.060 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:52.060 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:52.060 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- 
# local num_base_bdevs_operational=4 00:21:52.060 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:52.060 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:52.060 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:52.060 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:52.060 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.060 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:52.362 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:52.362 "name": "Existed_Raid", 00:21:52.362 "uuid": "cd17127b-00f2-4b05-9958-81858189c652", 00:21:52.362 "strip_size_kb": 0, 00:21:52.362 "state": "configuring", 00:21:52.362 "raid_level": "raid1", 00:21:52.362 "superblock": true, 00:21:52.362 "num_base_bdevs": 4, 00:21:52.362 "num_base_bdevs_discovered": 3, 00:21:52.362 "num_base_bdevs_operational": 4, 00:21:52.362 "base_bdevs_list": [ 00:21:52.362 { 00:21:52.362 "name": "BaseBdev1", 00:21:52.362 "uuid": "5b164641-b110-4f0a-88d5-cb09691549f7", 00:21:52.362 "is_configured": true, 00:21:52.362 "data_offset": 2048, 00:21:52.362 "data_size": 63488 00:21:52.362 }, 00:21:52.362 { 00:21:52.362 "name": null, 00:21:52.362 "uuid": "ea489f39-a95f-4283-8795-069ee4764a92", 00:21:52.362 "is_configured": false, 00:21:52.362 "data_offset": 2048, 00:21:52.362 "data_size": 63488 00:21:52.362 }, 00:21:52.362 { 00:21:52.362 "name": "BaseBdev3", 00:21:52.362 "uuid": "cfe54427-07dd-4eb0-b3be-214884b56d88", 00:21:52.362 "is_configured": true, 00:21:52.362 "data_offset": 2048, 00:21:52.362 "data_size": 63488 00:21:52.362 
}, 00:21:52.362 { 00:21:52.362 "name": "BaseBdev4", 00:21:52.362 "uuid": "a8a99729-9048-4ee2-b09c-adfbbdd2d44d", 00:21:52.362 "is_configured": true, 00:21:52.362 "data_offset": 2048, 00:21:52.362 "data_size": 63488 00:21:52.362 } 00:21:52.362 ] 00:21:52.362 }' 00:21:52.362 13:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:52.362 13:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.929 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:52.929 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.188 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:53.188 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:53.188 [2024-07-25 13:22:03.635978] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:53.188 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:53.188 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:53.188 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:53.188 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:53.188 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:53.188 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:53.188 13:22:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:53.188 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:53.188 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:53.188 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:53.188 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.188 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.447 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:53.447 "name": "Existed_Raid", 00:21:53.447 "uuid": "cd17127b-00f2-4b05-9958-81858189c652", 00:21:53.447 "strip_size_kb": 0, 00:21:53.447 "state": "configuring", 00:21:53.447 "raid_level": "raid1", 00:21:53.447 "superblock": true, 00:21:53.447 "num_base_bdevs": 4, 00:21:53.447 "num_base_bdevs_discovered": 2, 00:21:53.447 "num_base_bdevs_operational": 4, 00:21:53.447 "base_bdevs_list": [ 00:21:53.447 { 00:21:53.447 "name": null, 00:21:53.447 "uuid": "5b164641-b110-4f0a-88d5-cb09691549f7", 00:21:53.447 "is_configured": false, 00:21:53.447 "data_offset": 2048, 00:21:53.447 "data_size": 63488 00:21:53.447 }, 00:21:53.447 { 00:21:53.447 "name": null, 00:21:53.447 "uuid": "ea489f39-a95f-4283-8795-069ee4764a92", 00:21:53.447 "is_configured": false, 00:21:53.447 "data_offset": 2048, 00:21:53.447 "data_size": 63488 00:21:53.447 }, 00:21:53.447 { 00:21:53.447 "name": "BaseBdev3", 00:21:53.447 "uuid": "cfe54427-07dd-4eb0-b3be-214884b56d88", 00:21:53.447 "is_configured": true, 00:21:53.447 "data_offset": 2048, 00:21:53.447 "data_size": 63488 00:21:53.447 }, 00:21:53.447 { 00:21:53.447 "name": "BaseBdev4", 00:21:53.447 
"uuid": "a8a99729-9048-4ee2-b09c-adfbbdd2d44d", 00:21:53.447 "is_configured": true, 00:21:53.447 "data_offset": 2048, 00:21:53.447 "data_size": 63488 00:21:53.447 } 00:21:53.447 ] 00:21:53.447 }' 00:21:53.447 13:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:53.447 13:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:54.014 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.014 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:54.272 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:54.272 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:54.531 [2024-07-25 13:22:04.901273] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:54.531 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:54.531 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:54.531 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:54.531 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:54.531 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:54.531 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:54.531 13:22:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:54.531 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:54.531 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:54.531 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:54.531 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.531 13:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.790 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:54.790 "name": "Existed_Raid", 00:21:54.790 "uuid": "cd17127b-00f2-4b05-9958-81858189c652", 00:21:54.790 "strip_size_kb": 0, 00:21:54.790 "state": "configuring", 00:21:54.790 "raid_level": "raid1", 00:21:54.790 "superblock": true, 00:21:54.790 "num_base_bdevs": 4, 00:21:54.790 "num_base_bdevs_discovered": 3, 00:21:54.790 "num_base_bdevs_operational": 4, 00:21:54.790 "base_bdevs_list": [ 00:21:54.790 { 00:21:54.790 "name": null, 00:21:54.790 "uuid": "5b164641-b110-4f0a-88d5-cb09691549f7", 00:21:54.790 "is_configured": false, 00:21:54.790 "data_offset": 2048, 00:21:54.790 "data_size": 63488 00:21:54.790 }, 00:21:54.790 { 00:21:54.790 "name": "BaseBdev2", 00:21:54.790 "uuid": "ea489f39-a95f-4283-8795-069ee4764a92", 00:21:54.790 "is_configured": true, 00:21:54.790 "data_offset": 2048, 00:21:54.790 "data_size": 63488 00:21:54.790 }, 00:21:54.790 { 00:21:54.790 "name": "BaseBdev3", 00:21:54.790 "uuid": "cfe54427-07dd-4eb0-b3be-214884b56d88", 00:21:54.790 "is_configured": true, 00:21:54.790 "data_offset": 2048, 00:21:54.790 "data_size": 63488 00:21:54.790 }, 00:21:54.790 { 00:21:54.790 "name": "BaseBdev4", 
00:21:54.790 "uuid": "a8a99729-9048-4ee2-b09c-adfbbdd2d44d", 00:21:54.790 "is_configured": true, 00:21:54.790 "data_offset": 2048, 00:21:54.790 "data_size": 63488 00:21:54.790 } 00:21:54.790 ] 00:21:54.790 }' 00:21:54.790 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:54.790 13:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:55.357 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.357 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:55.616 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:55.616 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.616 13:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:55.875 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5b164641-b110-4f0a-88d5-cb09691549f7 00:21:56.134 [2024-07-25 13:22:06.412499] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:56.134 [2024-07-25 13:22:06.412643] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d6ded0 00:21:56.134 [2024-07-25 13:22:06.412655] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:56.134 [2024-07-25 13:22:06.412816] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d6c9d0 00:21:56.134 [2024-07-25 13:22:06.412925] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d6ded0 00:21:56.134 [2024-07-25 13:22:06.412934] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1d6ded0 00:21:56.134 [2024-07-25 13:22:06.413017] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.134 NewBaseBdev 00:21:56.134 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:56.134 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:21:56.134 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:56.134 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:56.134 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:56.134 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:56.134 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:56.393 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:56.393 [ 00:21:56.393 { 00:21:56.393 "name": "NewBaseBdev", 00:21:56.393 "aliases": [ 00:21:56.393 "5b164641-b110-4f0a-88d5-cb09691549f7" 00:21:56.393 ], 00:21:56.393 "product_name": "Malloc disk", 00:21:56.393 "block_size": 512, 00:21:56.393 "num_blocks": 65536, 00:21:56.393 "uuid": "5b164641-b110-4f0a-88d5-cb09691549f7", 00:21:56.393 "assigned_rate_limits": { 00:21:56.393 "rw_ios_per_sec": 0, 00:21:56.393 "rw_mbytes_per_sec": 0, 00:21:56.393 "r_mbytes_per_sec": 0, 00:21:56.393 
"w_mbytes_per_sec": 0 00:21:56.393 }, 00:21:56.393 "claimed": true, 00:21:56.393 "claim_type": "exclusive_write", 00:21:56.393 "zoned": false, 00:21:56.393 "supported_io_types": { 00:21:56.393 "read": true, 00:21:56.393 "write": true, 00:21:56.393 "unmap": true, 00:21:56.393 "flush": true, 00:21:56.393 "reset": true, 00:21:56.393 "nvme_admin": false, 00:21:56.393 "nvme_io": false, 00:21:56.393 "nvme_io_md": false, 00:21:56.393 "write_zeroes": true, 00:21:56.393 "zcopy": true, 00:21:56.393 "get_zone_info": false, 00:21:56.394 "zone_management": false, 00:21:56.394 "zone_append": false, 00:21:56.394 "compare": false, 00:21:56.394 "compare_and_write": false, 00:21:56.394 "abort": true, 00:21:56.394 "seek_hole": false, 00:21:56.394 "seek_data": false, 00:21:56.394 "copy": true, 00:21:56.394 "nvme_iov_md": false 00:21:56.394 }, 00:21:56.394 "memory_domains": [ 00:21:56.394 { 00:21:56.394 "dma_device_id": "system", 00:21:56.394 "dma_device_type": 1 00:21:56.394 }, 00:21:56.394 { 00:21:56.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.394 "dma_device_type": 2 00:21:56.394 } 00:21:56.394 ], 00:21:56.394 "driver_specific": {} 00:21:56.394 } 00:21:56.394 ] 00:21:56.394 13:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:56.394 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:21:56.394 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:56.394 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:56.394 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:56.394 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:56.394 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:21:56.653 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.653 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.653 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.653 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.653 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.653 13:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.653 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:56.653 "name": "Existed_Raid", 00:21:56.653 "uuid": "cd17127b-00f2-4b05-9958-81858189c652", 00:21:56.653 "strip_size_kb": 0, 00:21:56.653 "state": "online", 00:21:56.653 "raid_level": "raid1", 00:21:56.653 "superblock": true, 00:21:56.653 "num_base_bdevs": 4, 00:21:56.653 "num_base_bdevs_discovered": 4, 00:21:56.653 "num_base_bdevs_operational": 4, 00:21:56.653 "base_bdevs_list": [ 00:21:56.653 { 00:21:56.653 "name": "NewBaseBdev", 00:21:56.653 "uuid": "5b164641-b110-4f0a-88d5-cb09691549f7", 00:21:56.653 "is_configured": true, 00:21:56.653 "data_offset": 2048, 00:21:56.653 "data_size": 63488 00:21:56.653 }, 00:21:56.653 { 00:21:56.653 "name": "BaseBdev2", 00:21:56.653 "uuid": "ea489f39-a95f-4283-8795-069ee4764a92", 00:21:56.653 "is_configured": true, 00:21:56.653 "data_offset": 2048, 00:21:56.653 "data_size": 63488 00:21:56.653 }, 00:21:56.653 { 00:21:56.653 "name": "BaseBdev3", 00:21:56.653 "uuid": "cfe54427-07dd-4eb0-b3be-214884b56d88", 00:21:56.653 "is_configured": true, 00:21:56.653 "data_offset": 2048, 00:21:56.653 "data_size": 63488 00:21:56.653 }, 
00:21:56.653 { 00:21:56.653 "name": "BaseBdev4", 00:21:56.653 "uuid": "a8a99729-9048-4ee2-b09c-adfbbdd2d44d", 00:21:56.653 "is_configured": true, 00:21:56.653 "data_offset": 2048, 00:21:56.653 "data_size": 63488 00:21:56.653 } 00:21:56.653 ] 00:21:56.653 }' 00:21:56.653 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:56.653 13:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:57.590 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:57.590 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:57.590 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:57.590 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:57.590 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:57.590 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:57.590 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:57.590 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:57.590 [2024-07-25 13:22:07.916781] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:57.590 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:57.590 "name": "Existed_Raid", 00:21:57.590 "aliases": [ 00:21:57.590 "cd17127b-00f2-4b05-9958-81858189c652" 00:21:57.590 ], 00:21:57.590 "product_name": "Raid Volume", 00:21:57.590 "block_size": 512, 00:21:57.590 "num_blocks": 63488, 00:21:57.590 "uuid": "cd17127b-00f2-4b05-9958-81858189c652", 
00:21:57.590 "assigned_rate_limits": { 00:21:57.590 "rw_ios_per_sec": 0, 00:21:57.590 "rw_mbytes_per_sec": 0, 00:21:57.590 "r_mbytes_per_sec": 0, 00:21:57.590 "w_mbytes_per_sec": 0 00:21:57.590 }, 00:21:57.590 "claimed": false, 00:21:57.590 "zoned": false, 00:21:57.590 "supported_io_types": { 00:21:57.590 "read": true, 00:21:57.590 "write": true, 00:21:57.590 "unmap": false, 00:21:57.590 "flush": false, 00:21:57.590 "reset": true, 00:21:57.590 "nvme_admin": false, 00:21:57.590 "nvme_io": false, 00:21:57.590 "nvme_io_md": false, 00:21:57.590 "write_zeroes": true, 00:21:57.590 "zcopy": false, 00:21:57.590 "get_zone_info": false, 00:21:57.590 "zone_management": false, 00:21:57.590 "zone_append": false, 00:21:57.590 "compare": false, 00:21:57.590 "compare_and_write": false, 00:21:57.590 "abort": false, 00:21:57.590 "seek_hole": false, 00:21:57.590 "seek_data": false, 00:21:57.590 "copy": false, 00:21:57.590 "nvme_iov_md": false 00:21:57.590 }, 00:21:57.590 "memory_domains": [ 00:21:57.590 { 00:21:57.590 "dma_device_id": "system", 00:21:57.590 "dma_device_type": 1 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.590 "dma_device_type": 2 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "dma_device_id": "system", 00:21:57.590 "dma_device_type": 1 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.590 "dma_device_type": 2 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "dma_device_id": "system", 00:21:57.590 "dma_device_type": 1 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.590 "dma_device_type": 2 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "dma_device_id": "system", 00:21:57.590 "dma_device_type": 1 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.590 "dma_device_type": 2 00:21:57.590 } 00:21:57.590 ], 00:21:57.590 "driver_specific": { 00:21:57.590 "raid": { 00:21:57.590 "uuid": 
"cd17127b-00f2-4b05-9958-81858189c652", 00:21:57.590 "strip_size_kb": 0, 00:21:57.590 "state": "online", 00:21:57.590 "raid_level": "raid1", 00:21:57.590 "superblock": true, 00:21:57.590 "num_base_bdevs": 4, 00:21:57.590 "num_base_bdevs_discovered": 4, 00:21:57.590 "num_base_bdevs_operational": 4, 00:21:57.590 "base_bdevs_list": [ 00:21:57.590 { 00:21:57.590 "name": "NewBaseBdev", 00:21:57.590 "uuid": "5b164641-b110-4f0a-88d5-cb09691549f7", 00:21:57.590 "is_configured": true, 00:21:57.590 "data_offset": 2048, 00:21:57.590 "data_size": 63488 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "name": "BaseBdev2", 00:21:57.590 "uuid": "ea489f39-a95f-4283-8795-069ee4764a92", 00:21:57.590 "is_configured": true, 00:21:57.590 "data_offset": 2048, 00:21:57.590 "data_size": 63488 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "name": "BaseBdev3", 00:21:57.590 "uuid": "cfe54427-07dd-4eb0-b3be-214884b56d88", 00:21:57.590 "is_configured": true, 00:21:57.590 "data_offset": 2048, 00:21:57.590 "data_size": 63488 00:21:57.590 }, 00:21:57.590 { 00:21:57.590 "name": "BaseBdev4", 00:21:57.590 "uuid": "a8a99729-9048-4ee2-b09c-adfbbdd2d44d", 00:21:57.590 "is_configured": true, 00:21:57.590 "data_offset": 2048, 00:21:57.590 "data_size": 63488 00:21:57.590 } 00:21:57.590 ] 00:21:57.590 } 00:21:57.590 } 00:21:57.590 }' 00:21:57.590 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:57.590 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:57.590 BaseBdev2 00:21:57.590 BaseBdev3 00:21:57.590 BaseBdev4' 00:21:57.590 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:57.590 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:57.590 13:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:57.849 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:57.849 "name": "NewBaseBdev", 00:21:57.849 "aliases": [ 00:21:57.849 "5b164641-b110-4f0a-88d5-cb09691549f7" 00:21:57.849 ], 00:21:57.849 "product_name": "Malloc disk", 00:21:57.849 "block_size": 512, 00:21:57.849 "num_blocks": 65536, 00:21:57.849 "uuid": "5b164641-b110-4f0a-88d5-cb09691549f7", 00:21:57.849 "assigned_rate_limits": { 00:21:57.849 "rw_ios_per_sec": 0, 00:21:57.849 "rw_mbytes_per_sec": 0, 00:21:57.849 "r_mbytes_per_sec": 0, 00:21:57.849 "w_mbytes_per_sec": 0 00:21:57.849 }, 00:21:57.849 "claimed": true, 00:21:57.849 "claim_type": "exclusive_write", 00:21:57.849 "zoned": false, 00:21:57.849 "supported_io_types": { 00:21:57.849 "read": true, 00:21:57.849 "write": true, 00:21:57.849 "unmap": true, 00:21:57.849 "flush": true, 00:21:57.849 "reset": true, 00:21:57.849 "nvme_admin": false, 00:21:57.849 "nvme_io": false, 00:21:57.849 "nvme_io_md": false, 00:21:57.849 "write_zeroes": true, 00:21:57.849 "zcopy": true, 00:21:57.849 "get_zone_info": false, 00:21:57.849 "zone_management": false, 00:21:57.849 "zone_append": false, 00:21:57.849 "compare": false, 00:21:57.849 "compare_and_write": false, 00:21:57.849 "abort": true, 00:21:57.849 "seek_hole": false, 00:21:57.849 "seek_data": false, 00:21:57.849 "copy": true, 00:21:57.849 "nvme_iov_md": false 00:21:57.849 }, 00:21:57.849 "memory_domains": [ 00:21:57.849 { 00:21:57.849 "dma_device_id": "system", 00:21:57.849 "dma_device_type": 1 00:21:57.849 }, 00:21:57.849 { 00:21:57.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.849 "dma_device_type": 2 00:21:57.849 } 00:21:57.849 ], 00:21:57.849 "driver_specific": {} 00:21:57.849 }' 00:21:57.849 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:57.849 13:22:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:57.849 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:57.849 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:57.849 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:58.108 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:58.108 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:58.108 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:58.108 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:58.108 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:58.108 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:58.108 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:58.108 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:58.108 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:58.108 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:58.367 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:58.367 "name": "BaseBdev2", 00:21:58.367 "aliases": [ 00:21:58.367 "ea489f39-a95f-4283-8795-069ee4764a92" 00:21:58.367 ], 00:21:58.367 "product_name": "Malloc disk", 00:21:58.367 "block_size": 512, 00:21:58.367 "num_blocks": 65536, 00:21:58.367 "uuid": "ea489f39-a95f-4283-8795-069ee4764a92", 00:21:58.367 
"assigned_rate_limits": { 00:21:58.367 "rw_ios_per_sec": 0, 00:21:58.367 "rw_mbytes_per_sec": 0, 00:21:58.367 "r_mbytes_per_sec": 0, 00:21:58.367 "w_mbytes_per_sec": 0 00:21:58.367 }, 00:21:58.367 "claimed": true, 00:21:58.367 "claim_type": "exclusive_write", 00:21:58.367 "zoned": false, 00:21:58.367 "supported_io_types": { 00:21:58.367 "read": true, 00:21:58.367 "write": true, 00:21:58.367 "unmap": true, 00:21:58.367 "flush": true, 00:21:58.367 "reset": true, 00:21:58.367 "nvme_admin": false, 00:21:58.367 "nvme_io": false, 00:21:58.367 "nvme_io_md": false, 00:21:58.367 "write_zeroes": true, 00:21:58.367 "zcopy": true, 00:21:58.367 "get_zone_info": false, 00:21:58.367 "zone_management": false, 00:21:58.367 "zone_append": false, 00:21:58.367 "compare": false, 00:21:58.367 "compare_and_write": false, 00:21:58.367 "abort": true, 00:21:58.367 "seek_hole": false, 00:21:58.367 "seek_data": false, 00:21:58.367 "copy": true, 00:21:58.367 "nvme_iov_md": false 00:21:58.367 }, 00:21:58.367 "memory_domains": [ 00:21:58.367 { 00:21:58.367 "dma_device_id": "system", 00:21:58.367 "dma_device_type": 1 00:21:58.367 }, 00:21:58.367 { 00:21:58.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.367 "dma_device_type": 2 00:21:58.367 } 00:21:58.367 ], 00:21:58.367 "driver_specific": {} 00:21:58.367 }' 00:21:58.367 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:58.367 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:58.367 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:58.367 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:58.625 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:58.625 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:58.625 13:22:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:58.625 13:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:58.625 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:58.625 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:58.625 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:58.625 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:58.625 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:58.625 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:58.625 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:58.884 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:58.884 "name": "BaseBdev3", 00:21:58.884 "aliases": [ 00:21:58.884 "cfe54427-07dd-4eb0-b3be-214884b56d88" 00:21:58.884 ], 00:21:58.884 "product_name": "Malloc disk", 00:21:58.884 "block_size": 512, 00:21:58.884 "num_blocks": 65536, 00:21:58.884 "uuid": "cfe54427-07dd-4eb0-b3be-214884b56d88", 00:21:58.884 "assigned_rate_limits": { 00:21:58.884 "rw_ios_per_sec": 0, 00:21:58.884 "rw_mbytes_per_sec": 0, 00:21:58.884 "r_mbytes_per_sec": 0, 00:21:58.884 "w_mbytes_per_sec": 0 00:21:58.884 }, 00:21:58.884 "claimed": true, 00:21:58.884 "claim_type": "exclusive_write", 00:21:58.884 "zoned": false, 00:21:58.884 "supported_io_types": { 00:21:58.884 "read": true, 00:21:58.884 "write": true, 00:21:58.884 "unmap": true, 00:21:58.884 "flush": true, 00:21:58.884 "reset": true, 00:21:58.884 "nvme_admin": false, 00:21:58.884 "nvme_io": false, 00:21:58.884 "nvme_io_md": false, 00:21:58.884 
"write_zeroes": true, 00:21:58.884 "zcopy": true, 00:21:58.884 "get_zone_info": false, 00:21:58.884 "zone_management": false, 00:21:58.884 "zone_append": false, 00:21:58.884 "compare": false, 00:21:58.884 "compare_and_write": false, 00:21:58.884 "abort": true, 00:21:58.884 "seek_hole": false, 00:21:58.884 "seek_data": false, 00:21:58.884 "copy": true, 00:21:58.884 "nvme_iov_md": false 00:21:58.884 }, 00:21:58.884 "memory_domains": [ 00:21:58.884 { 00:21:58.884 "dma_device_id": "system", 00:21:58.884 "dma_device_type": 1 00:21:58.884 }, 00:21:58.884 { 00:21:58.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.884 "dma_device_type": 2 00:21:58.884 } 00:21:58.884 ], 00:21:58.884 "driver_specific": {} 00:21:58.884 }' 00:21:58.884 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:59.143 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:59.143 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:59.143 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.143 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.143 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:59.143 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.143 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.143 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:59.143 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.143 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.402 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:21:59.402 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:59.402 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:59.402 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:59.402 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:59.402 "name": "BaseBdev4", 00:21:59.402 "aliases": [ 00:21:59.402 "a8a99729-9048-4ee2-b09c-adfbbdd2d44d" 00:21:59.402 ], 00:21:59.402 "product_name": "Malloc disk", 00:21:59.402 "block_size": 512, 00:21:59.402 "num_blocks": 65536, 00:21:59.402 "uuid": "a8a99729-9048-4ee2-b09c-adfbbdd2d44d", 00:21:59.402 "assigned_rate_limits": { 00:21:59.402 "rw_ios_per_sec": 0, 00:21:59.402 "rw_mbytes_per_sec": 0, 00:21:59.402 "r_mbytes_per_sec": 0, 00:21:59.402 "w_mbytes_per_sec": 0 00:21:59.402 }, 00:21:59.402 "claimed": true, 00:21:59.402 "claim_type": "exclusive_write", 00:21:59.402 "zoned": false, 00:21:59.402 "supported_io_types": { 00:21:59.402 "read": true, 00:21:59.402 "write": true, 00:21:59.402 "unmap": true, 00:21:59.402 "flush": true, 00:21:59.402 "reset": true, 00:21:59.402 "nvme_admin": false, 00:21:59.402 "nvme_io": false, 00:21:59.402 "nvme_io_md": false, 00:21:59.402 "write_zeroes": true, 00:21:59.402 "zcopy": true, 00:21:59.402 "get_zone_info": false, 00:21:59.402 "zone_management": false, 00:21:59.402 "zone_append": false, 00:21:59.402 "compare": false, 00:21:59.402 "compare_and_write": false, 00:21:59.402 "abort": true, 00:21:59.402 "seek_hole": false, 00:21:59.402 "seek_data": false, 00:21:59.402 "copy": true, 00:21:59.402 "nvme_iov_md": false 00:21:59.402 }, 00:21:59.402 "memory_domains": [ 00:21:59.402 { 00:21:59.402 "dma_device_id": "system", 00:21:59.402 "dma_device_type": 1 00:21:59.402 }, 00:21:59.402 { 00:21:59.402 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.402 "dma_device_type": 2 00:21:59.402 } 00:21:59.402 ], 00:21:59.402 "driver_specific": {} 00:21:59.402 }' 00:21:59.402 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:59.661 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:59.661 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:59.661 13:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.662 13:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.662 13:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:59.662 13:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.662 13:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.662 13:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:59.662 13:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.921 13:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.921 13:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:59.921 13:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:00.180 [2024-07-25 13:22:10.435160] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:00.180 [2024-07-25 13:22:10.435186] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:00.180 [2024-07-25 13:22:10.435232] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:22:00.180 [2024-07-25 13:22:10.435488] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:00.180 [2024-07-25 13:22:10.435500] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d6ded0 name Existed_Raid, state offline 00:22:00.180 13:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 941707 00:22:00.180 13:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 941707 ']' 00:22:00.180 13:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 941707 00:22:00.180 13:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:22:00.180 13:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:00.180 13:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 941707 00:22:00.180 13:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:00.180 13:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:00.180 13:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 941707' 00:22:00.180 killing process with pid 941707 00:22:00.180 13:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 941707 00:22:00.180 [2024-07-25 13:22:10.507701] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:00.180 13:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 941707 00:22:00.180 [2024-07-25 13:22:10.539269] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:00.439 13:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:22:00.439 
00:22:00.439 real 0m30.491s 00:22:00.439 user 0m55.811s 00:22:00.439 sys 0m5.631s 00:22:00.439 13:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:00.439 13:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:00.439 ************************************ 00:22:00.439 END TEST raid_state_function_test_sb 00:22:00.439 ************************************ 00:22:00.439 13:22:10 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:22:00.439 13:22:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:00.439 13:22:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:00.439 13:22:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:00.439 ************************************ 00:22:00.439 START TEST raid_superblock_test 00:22:00.439 ************************************ 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=947409 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 947409 /var/tmp/spdk-raid.sock 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 947409 ']' 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:00.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.439 13:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.439 [2024-07-25 13:22:10.878359] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:22:00.439 [2024-07-25 13:22:10.878418] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947409 ] 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3f:01.7 
cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:00.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:00.698 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:00.698 [2024-07-25 13:22:11.011616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.698 [2024-07-25 13:22:11.095269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.698 [2024-07-25 13:22:11.155658] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:00.698 [2024-07-25 13:22:11.155697] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:01.634 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.634 13:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:22:01.634 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:22:01.634 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 
00:22:01.634 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:22:01.634 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:22:01.634 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:01.634 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:01.634 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:01.634 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:01.634 13:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:01.634 malloc1 00:22:01.634 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:01.893 [2024-07-25 13:22:12.212221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:01.893 [2024-07-25 13:22:12.212266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.893 [2024-07-25 13:22:12.212283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19fa2f0 00:22:01.893 [2024-07-25 13:22:12.212294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.893 [2024-07-25 13:22:12.213743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.893 [2024-07-25 13:22:12.213770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:01.893 pt1 00:22:01.893 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( 
i++ )) 00:22:01.893 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:01.893 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:22:01.893 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:22:01.893 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:01.893 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:01.893 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:01.893 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:01.893 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:02.153 malloc2 00:22:02.153 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:02.411 [2024-07-25 13:22:12.669850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:02.411 [2024-07-25 13:22:12.669891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.411 [2024-07-25 13:22:12.669907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b91f70 00:22:02.411 [2024-07-25 13:22:12.669918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.411 [2024-07-25 13:22:12.671252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.411 [2024-07-25 13:22:12.671278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:22:02.411 pt2 00:22:02.411 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:02.411 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:02.411 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:22:02.411 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:22:02.411 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:02.411 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:02.411 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:02.411 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:02.411 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:02.670 malloc3 00:22:02.670 13:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:02.670 [2024-07-25 13:22:13.131119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:02.670 [2024-07-25 13:22:13.131162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.670 [2024-07-25 13:22:13.131177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b95830 00:22:02.670 [2024-07-25 13:22:13.131189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.670 [2024-07-25 13:22:13.132444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:22:02.670 [2024-07-25 13:22:13.132470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:02.670 pt3 00:22:02.670 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:02.670 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:02.670 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:22:02.670 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:22:02.670 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:02.670 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:02.670 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:02.670 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:02.670 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:22:02.929 malloc4 00:22:02.929 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:03.188 [2024-07-25 13:22:13.584347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:03.188 [2024-07-25 13:22:13.584385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.188 [2024-07-25 13:22:13.584401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b96f10 00:22:03.188 [2024-07-25 13:22:13.584413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:22:03.188 [2024-07-25 13:22:13.585672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.188 [2024-07-25 13:22:13.585697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:03.188 pt4 00:22:03.188 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:03.188 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:03.188 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:22:03.446 [2024-07-25 13:22:13.808953] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:03.446 [2024-07-25 13:22:13.810023] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:03.446 [2024-07-25 13:22:13.810074] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:03.446 [2024-07-25 13:22:13.810114] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:03.446 [2024-07-25 13:22:13.810262] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b98190 00:22:03.446 [2024-07-25 13:22:13.810272] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:03.446 [2024-07-25 13:22:13.810438] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b96c30 00:22:03.446 [2024-07-25 13:22:13.810563] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b98190 00:22:03.446 [2024-07-25 13:22:13.810572] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b98190 00:22:03.446 [2024-07-25 13:22:13.810670] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.446 13:22:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:03.446 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:03.447 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:03.447 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:03.447 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:03.447 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:03.447 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:03.447 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:03.447 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:03.447 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:03.447 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.447 13:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.705 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.705 "name": "raid_bdev1", 00:22:03.705 "uuid": "4926dc79-ac10-47a9-af1c-8749df5b9fed", 00:22:03.705 "strip_size_kb": 0, 00:22:03.705 "state": "online", 00:22:03.706 "raid_level": "raid1", 00:22:03.706 "superblock": true, 00:22:03.706 "num_base_bdevs": 4, 00:22:03.706 "num_base_bdevs_discovered": 4, 00:22:03.706 "num_base_bdevs_operational": 4, 00:22:03.706 "base_bdevs_list": [ 00:22:03.706 { 00:22:03.706 "name": "pt1", 00:22:03.706 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:03.706 "is_configured": true, 
00:22:03.706 "data_offset": 2048, 00:22:03.706 "data_size": 63488 00:22:03.706 }, 00:22:03.706 { 00:22:03.706 "name": "pt2", 00:22:03.706 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:03.706 "is_configured": true, 00:22:03.706 "data_offset": 2048, 00:22:03.706 "data_size": 63488 00:22:03.706 }, 00:22:03.706 { 00:22:03.706 "name": "pt3", 00:22:03.706 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:03.706 "is_configured": true, 00:22:03.706 "data_offset": 2048, 00:22:03.706 "data_size": 63488 00:22:03.706 }, 00:22:03.706 { 00:22:03.706 "name": "pt4", 00:22:03.706 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:03.706 "is_configured": true, 00:22:03.706 "data_offset": 2048, 00:22:03.706 "data_size": 63488 00:22:03.706 } 00:22:03.706 ] 00:22:03.706 }' 00:22:03.706 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.706 13:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.274 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:22:04.274 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:04.274 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:04.274 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:04.274 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:04.274 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:04.274 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:04.274 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:04.274 [2024-07-25 13:22:14.707587] bdev_raid.c:1120:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:22:04.274 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:04.274 "name": "raid_bdev1", 00:22:04.274 "aliases": [ 00:22:04.274 "4926dc79-ac10-47a9-af1c-8749df5b9fed" 00:22:04.274 ], 00:22:04.274 "product_name": "Raid Volume", 00:22:04.274 "block_size": 512, 00:22:04.274 "num_blocks": 63488, 00:22:04.274 "uuid": "4926dc79-ac10-47a9-af1c-8749df5b9fed", 00:22:04.274 "assigned_rate_limits": { 00:22:04.274 "rw_ios_per_sec": 0, 00:22:04.274 "rw_mbytes_per_sec": 0, 00:22:04.274 "r_mbytes_per_sec": 0, 00:22:04.274 "w_mbytes_per_sec": 0 00:22:04.274 }, 00:22:04.274 "claimed": false, 00:22:04.274 "zoned": false, 00:22:04.274 "supported_io_types": { 00:22:04.274 "read": true, 00:22:04.274 "write": true, 00:22:04.274 "unmap": false, 00:22:04.274 "flush": false, 00:22:04.274 "reset": true, 00:22:04.274 "nvme_admin": false, 00:22:04.274 "nvme_io": false, 00:22:04.274 "nvme_io_md": false, 00:22:04.274 "write_zeroes": true, 00:22:04.274 "zcopy": false, 00:22:04.274 "get_zone_info": false, 00:22:04.274 "zone_management": false, 00:22:04.274 "zone_append": false, 00:22:04.274 "compare": false, 00:22:04.274 "compare_and_write": false, 00:22:04.274 "abort": false, 00:22:04.274 "seek_hole": false, 00:22:04.274 "seek_data": false, 00:22:04.274 "copy": false, 00:22:04.274 "nvme_iov_md": false 00:22:04.274 }, 00:22:04.274 "memory_domains": [ 00:22:04.274 { 00:22:04.274 "dma_device_id": "system", 00:22:04.274 "dma_device_type": 1 00:22:04.274 }, 00:22:04.274 { 00:22:04.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.274 "dma_device_type": 2 00:22:04.274 }, 00:22:04.274 { 00:22:04.274 "dma_device_id": "system", 00:22:04.274 "dma_device_type": 1 00:22:04.274 }, 00:22:04.274 { 00:22:04.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.274 "dma_device_type": 2 00:22:04.274 }, 00:22:04.274 { 00:22:04.274 "dma_device_id": "system", 00:22:04.275 "dma_device_type": 1 00:22:04.275 }, 00:22:04.275 { 
00:22:04.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.275 "dma_device_type": 2 00:22:04.275 }, 00:22:04.275 { 00:22:04.275 "dma_device_id": "system", 00:22:04.275 "dma_device_type": 1 00:22:04.275 }, 00:22:04.275 { 00:22:04.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.275 "dma_device_type": 2 00:22:04.275 } 00:22:04.275 ], 00:22:04.275 "driver_specific": { 00:22:04.275 "raid": { 00:22:04.275 "uuid": "4926dc79-ac10-47a9-af1c-8749df5b9fed", 00:22:04.275 "strip_size_kb": 0, 00:22:04.275 "state": "online", 00:22:04.275 "raid_level": "raid1", 00:22:04.275 "superblock": true, 00:22:04.275 "num_base_bdevs": 4, 00:22:04.275 "num_base_bdevs_discovered": 4, 00:22:04.275 "num_base_bdevs_operational": 4, 00:22:04.275 "base_bdevs_list": [ 00:22:04.275 { 00:22:04.275 "name": "pt1", 00:22:04.275 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:04.275 "is_configured": true, 00:22:04.275 "data_offset": 2048, 00:22:04.275 "data_size": 63488 00:22:04.275 }, 00:22:04.275 { 00:22:04.275 "name": "pt2", 00:22:04.275 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:04.275 "is_configured": true, 00:22:04.275 "data_offset": 2048, 00:22:04.275 "data_size": 63488 00:22:04.275 }, 00:22:04.275 { 00:22:04.275 "name": "pt3", 00:22:04.275 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:04.275 "is_configured": true, 00:22:04.275 "data_offset": 2048, 00:22:04.275 "data_size": 63488 00:22:04.275 }, 00:22:04.275 { 00:22:04.275 "name": "pt4", 00:22:04.275 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:04.275 "is_configured": true, 00:22:04.275 "data_offset": 2048, 00:22:04.275 "data_size": 63488 00:22:04.275 } 00:22:04.275 ] 00:22:04.275 } 00:22:04.275 } 00:22:04.275 }' 00:22:04.275 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:04.534 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:04.534 pt2 
00:22:04.534 pt3 00:22:04.534 pt4' 00:22:04.534 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:04.534 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:04.534 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:04.534 13:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:04.534 "name": "pt1", 00:22:04.534 "aliases": [ 00:22:04.534 "00000000-0000-0000-0000-000000000001" 00:22:04.534 ], 00:22:04.534 "product_name": "passthru", 00:22:04.534 "block_size": 512, 00:22:04.534 "num_blocks": 65536, 00:22:04.534 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:04.534 "assigned_rate_limits": { 00:22:04.534 "rw_ios_per_sec": 0, 00:22:04.534 "rw_mbytes_per_sec": 0, 00:22:04.534 "r_mbytes_per_sec": 0, 00:22:04.534 "w_mbytes_per_sec": 0 00:22:04.534 }, 00:22:04.534 "claimed": true, 00:22:04.534 "claim_type": "exclusive_write", 00:22:04.534 "zoned": false, 00:22:04.534 "supported_io_types": { 00:22:04.534 "read": true, 00:22:04.534 "write": true, 00:22:04.534 "unmap": true, 00:22:04.534 "flush": true, 00:22:04.534 "reset": true, 00:22:04.534 "nvme_admin": false, 00:22:04.534 "nvme_io": false, 00:22:04.534 "nvme_io_md": false, 00:22:04.534 "write_zeroes": true, 00:22:04.534 "zcopy": true, 00:22:04.534 "get_zone_info": false, 00:22:04.534 "zone_management": false, 00:22:04.534 "zone_append": false, 00:22:04.534 "compare": false, 00:22:04.534 "compare_and_write": false, 00:22:04.534 "abort": true, 00:22:04.534 "seek_hole": false, 00:22:04.534 "seek_data": false, 00:22:04.534 "copy": true, 00:22:04.534 "nvme_iov_md": false 00:22:04.534 }, 00:22:04.534 "memory_domains": [ 00:22:04.534 { 00:22:04.534 "dma_device_id": "system", 00:22:04.534 "dma_device_type": 1 00:22:04.534 }, 00:22:04.534 { 00:22:04.534 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.534 "dma_device_type": 2 00:22:04.534 } 00:22:04.534 ], 00:22:04.534 "driver_specific": { 00:22:04.534 "passthru": { 00:22:04.534 "name": "pt1", 00:22:04.534 "base_bdev_name": "malloc1" 00:22:04.534 } 00:22:04.534 } 00:22:04.534 }' 00:22:04.534 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:04.793 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:04.793 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:04.793 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:04.793 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:04.793 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:04.793 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:04.793 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:04.793 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:04.793 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.051 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.051 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:05.051 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:05.051 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:05.051 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:05.339 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:05.339 "name": "pt2", 
00:22:05.339 "aliases": [ 00:22:05.339 "00000000-0000-0000-0000-000000000002" 00:22:05.339 ], 00:22:05.339 "product_name": "passthru", 00:22:05.339 "block_size": 512, 00:22:05.339 "num_blocks": 65536, 00:22:05.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:05.339 "assigned_rate_limits": { 00:22:05.339 "rw_ios_per_sec": 0, 00:22:05.339 "rw_mbytes_per_sec": 0, 00:22:05.339 "r_mbytes_per_sec": 0, 00:22:05.339 "w_mbytes_per_sec": 0 00:22:05.339 }, 00:22:05.339 "claimed": true, 00:22:05.339 "claim_type": "exclusive_write", 00:22:05.339 "zoned": false, 00:22:05.339 "supported_io_types": { 00:22:05.339 "read": true, 00:22:05.339 "write": true, 00:22:05.339 "unmap": true, 00:22:05.339 "flush": true, 00:22:05.339 "reset": true, 00:22:05.339 "nvme_admin": false, 00:22:05.339 "nvme_io": false, 00:22:05.339 "nvme_io_md": false, 00:22:05.339 "write_zeroes": true, 00:22:05.339 "zcopy": true, 00:22:05.339 "get_zone_info": false, 00:22:05.339 "zone_management": false, 00:22:05.339 "zone_append": false, 00:22:05.339 "compare": false, 00:22:05.339 "compare_and_write": false, 00:22:05.339 "abort": true, 00:22:05.339 "seek_hole": false, 00:22:05.339 "seek_data": false, 00:22:05.339 "copy": true, 00:22:05.339 "nvme_iov_md": false 00:22:05.339 }, 00:22:05.339 "memory_domains": [ 00:22:05.339 { 00:22:05.339 "dma_device_id": "system", 00:22:05.339 "dma_device_type": 1 00:22:05.339 }, 00:22:05.339 { 00:22:05.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.339 "dma_device_type": 2 00:22:05.339 } 00:22:05.339 ], 00:22:05.339 "driver_specific": { 00:22:05.339 "passthru": { 00:22:05.339 "name": "pt2", 00:22:05.339 "base_bdev_name": "malloc2" 00:22:05.339 } 00:22:05.339 } 00:22:05.339 }' 00:22:05.339 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.339 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.339 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 
]] 00:22:05.339 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.339 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.339 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:05.339 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.339 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.339 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:05.339 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.598 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.598 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:05.598 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:05.598 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:05.598 13:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:05.857 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:05.857 "name": "pt3", 00:22:05.857 "aliases": [ 00:22:05.857 "00000000-0000-0000-0000-000000000003" 00:22:05.857 ], 00:22:05.857 "product_name": "passthru", 00:22:05.857 "block_size": 512, 00:22:05.857 "num_blocks": 65536, 00:22:05.857 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:05.857 "assigned_rate_limits": { 00:22:05.857 "rw_ios_per_sec": 0, 00:22:05.857 "rw_mbytes_per_sec": 0, 00:22:05.857 "r_mbytes_per_sec": 0, 00:22:05.857 "w_mbytes_per_sec": 0 00:22:05.857 }, 00:22:05.857 "claimed": true, 00:22:05.857 "claim_type": "exclusive_write", 00:22:05.857 "zoned": false, 00:22:05.857 
"supported_io_types": { 00:22:05.857 "read": true, 00:22:05.857 "write": true, 00:22:05.857 "unmap": true, 00:22:05.857 "flush": true, 00:22:05.857 "reset": true, 00:22:05.857 "nvme_admin": false, 00:22:05.857 "nvme_io": false, 00:22:05.857 "nvme_io_md": false, 00:22:05.857 "write_zeroes": true, 00:22:05.857 "zcopy": true, 00:22:05.857 "get_zone_info": false, 00:22:05.857 "zone_management": false, 00:22:05.857 "zone_append": false, 00:22:05.857 "compare": false, 00:22:05.857 "compare_and_write": false, 00:22:05.857 "abort": true, 00:22:05.857 "seek_hole": false, 00:22:05.857 "seek_data": false, 00:22:05.857 "copy": true, 00:22:05.857 "nvme_iov_md": false 00:22:05.857 }, 00:22:05.857 "memory_domains": [ 00:22:05.857 { 00:22:05.857 "dma_device_id": "system", 00:22:05.857 "dma_device_type": 1 00:22:05.857 }, 00:22:05.857 { 00:22:05.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.857 "dma_device_type": 2 00:22:05.857 } 00:22:05.857 ], 00:22:05.857 "driver_specific": { 00:22:05.857 "passthru": { 00:22:05.857 "name": "pt3", 00:22:05.857 "base_bdev_name": "malloc3" 00:22:05.857 } 00:22:05.858 } 00:22:05.858 }' 00:22:05.858 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.858 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.858 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:05.858 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.858 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.858 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:05.858 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.858 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.858 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:22:05.858 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.117 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.117 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:06.117 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:06.117 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:06.117 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:06.375 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:06.375 "name": "pt4", 00:22:06.375 "aliases": [ 00:22:06.375 "00000000-0000-0000-0000-000000000004" 00:22:06.375 ], 00:22:06.375 "product_name": "passthru", 00:22:06.375 "block_size": 512, 00:22:06.375 "num_blocks": 65536, 00:22:06.375 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:06.375 "assigned_rate_limits": { 00:22:06.375 "rw_ios_per_sec": 0, 00:22:06.375 "rw_mbytes_per_sec": 0, 00:22:06.375 "r_mbytes_per_sec": 0, 00:22:06.375 "w_mbytes_per_sec": 0 00:22:06.375 }, 00:22:06.375 "claimed": true, 00:22:06.375 "claim_type": "exclusive_write", 00:22:06.375 "zoned": false, 00:22:06.375 "supported_io_types": { 00:22:06.375 "read": true, 00:22:06.375 "write": true, 00:22:06.375 "unmap": true, 00:22:06.375 "flush": true, 00:22:06.375 "reset": true, 00:22:06.375 "nvme_admin": false, 00:22:06.375 "nvme_io": false, 00:22:06.375 "nvme_io_md": false, 00:22:06.375 "write_zeroes": true, 00:22:06.375 "zcopy": true, 00:22:06.375 "get_zone_info": false, 00:22:06.375 "zone_management": false, 00:22:06.375 "zone_append": false, 00:22:06.375 "compare": false, 00:22:06.375 "compare_and_write": false, 00:22:06.375 "abort": true, 00:22:06.375 "seek_hole": false, 
00:22:06.375 "seek_data": false, 00:22:06.375 "copy": true, 00:22:06.375 "nvme_iov_md": false 00:22:06.375 }, 00:22:06.375 "memory_domains": [ 00:22:06.375 { 00:22:06.375 "dma_device_id": "system", 00:22:06.375 "dma_device_type": 1 00:22:06.375 }, 00:22:06.375 { 00:22:06.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.375 "dma_device_type": 2 00:22:06.375 } 00:22:06.375 ], 00:22:06.375 "driver_specific": { 00:22:06.375 "passthru": { 00:22:06.375 "name": "pt4", 00:22:06.375 "base_bdev_name": "malloc4" 00:22:06.375 } 00:22:06.375 } 00:22:06.375 }' 00:22:06.375 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.375 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.375 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:06.375 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.375 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.375 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:06.375 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.375 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.633 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:06.633 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.634 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.634 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:06.634 13:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:06.634 13:22:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:22:06.893 [2024-07-25 13:22:17.162019] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:06.893 13:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=4926dc79-ac10-47a9-af1c-8749df5b9fed 00:22:06.893 13:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 4926dc79-ac10-47a9-af1c-8749df5b9fed ']' 00:22:06.893 13:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:07.151 [2024-07-25 13:22:17.394347] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:07.151 [2024-07-25 13:22:17.394363] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:07.151 [2024-07-25 13:22:17.394407] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:07.151 [2024-07-25 13:22:17.394479] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:07.151 [2024-07-25 13:22:17.394489] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b98190 name raid_bdev1, state offline 00:22:07.151 13:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.151 13:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:22:07.410 13:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:22:07.410 13:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:22:07.410 13:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:07.410 13:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:07.410 13:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:07.410 13:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:07.669 13:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:07.669 13:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:07.929 13:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:07.929 13:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:08.188 13:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:08.188 13:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:08.447 13:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:22:08.447 13:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:08.447 13:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:22:08.447 13:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:08.447 13:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:08.447 13:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.447 13:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:08.447 13:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.447 13:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:08.447 13:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.447 13:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:08.447 13:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:08.447 13:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:08.707 [2024-07-25 13:22:18.966430] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:08.707 [2024-07-25 13:22:18.967700] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:08.707 [2024-07-25 13:22:18.967740] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:08.707 [2024-07-25 
13:22:18.967771] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:08.707 [2024-07-25 13:22:18.967813] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:08.707 [2024-07-25 13:22:18.967850] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:08.707 [2024-07-25 13:22:18.967872] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:08.707 [2024-07-25 13:22:18.967893] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:08.707 [2024-07-25 13:22:18.967910] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:08.707 [2024-07-25 13:22:18.967920] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b968d0 name raid_bdev1, state configuring 00:22:08.707 request: 00:22:08.707 { 00:22:08.707 "name": "raid_bdev1", 00:22:08.707 "raid_level": "raid1", 00:22:08.707 "base_bdevs": [ 00:22:08.707 "malloc1", 00:22:08.707 "malloc2", 00:22:08.707 "malloc3", 00:22:08.707 "malloc4" 00:22:08.707 ], 00:22:08.707 "superblock": false, 00:22:08.707 "method": "bdev_raid_create", 00:22:08.707 "req_id": 1 00:22:08.707 } 00:22:08.707 Got JSON-RPC error response 00:22:08.707 response: 00:22:08.707 { 00:22:08.707 "code": -17, 00:22:08.707 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:08.707 } 00:22:08.707 13:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:22:08.707 13:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:08.707 13:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:08.707 13:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:22:08.707 13:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.707 13:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:22:08.966 13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:22:08.966 13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:22:08.966 13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:08.966 [2024-07-25 13:22:19.411545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:08.966 [2024-07-25 13:22:19.411579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.966 [2024-07-25 13:22:19.411595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b968d0 00:22:08.966 [2024-07-25 13:22:19.411606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.966 [2024-07-25 13:22:19.413093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.966 [2024-07-25 13:22:19.413122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:08.966 [2024-07-25 13:22:19.413188] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:08.966 [2024-07-25 13:22:19.413214] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:08.966 pt1 00:22:08.966 13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:08.966 13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:08.966 
13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:08.966 13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:08.966 13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:08.966 13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:08.966 13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:08.966 13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:08.967 13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:08.967 13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:08.967 13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.967 13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.226 13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:09.226 "name": "raid_bdev1", 00:22:09.226 "uuid": "4926dc79-ac10-47a9-af1c-8749df5b9fed", 00:22:09.226 "strip_size_kb": 0, 00:22:09.226 "state": "configuring", 00:22:09.226 "raid_level": "raid1", 00:22:09.226 "superblock": true, 00:22:09.226 "num_base_bdevs": 4, 00:22:09.226 "num_base_bdevs_discovered": 1, 00:22:09.226 "num_base_bdevs_operational": 4, 00:22:09.226 "base_bdevs_list": [ 00:22:09.226 { 00:22:09.226 "name": "pt1", 00:22:09.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:09.226 "is_configured": true, 00:22:09.226 "data_offset": 2048, 00:22:09.226 "data_size": 63488 00:22:09.226 }, 00:22:09.226 { 00:22:09.226 "name": null, 00:22:09.226 "uuid": "00000000-0000-0000-0000-000000000002", 
00:22:09.226 "is_configured": false, 00:22:09.226 "data_offset": 2048, 00:22:09.226 "data_size": 63488 00:22:09.226 }, 00:22:09.226 { 00:22:09.226 "name": null, 00:22:09.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:09.226 "is_configured": false, 00:22:09.226 "data_offset": 2048, 00:22:09.226 "data_size": 63488 00:22:09.226 }, 00:22:09.226 { 00:22:09.226 "name": null, 00:22:09.226 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:09.226 "is_configured": false, 00:22:09.226 "data_offset": 2048, 00:22:09.226 "data_size": 63488 00:22:09.226 } 00:22:09.226 ] 00:22:09.226 }' 00:22:09.226 13:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:09.226 13:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.794 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:22:09.794 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:10.053 [2024-07-25 13:22:20.458329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:10.053 [2024-07-25 13:22:20.458382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.053 [2024-07-25 13:22:20.458399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b997d0 00:22:10.053 [2024-07-25 13:22:20.458411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.053 [2024-07-25 13:22:20.458744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.053 [2024-07-25 13:22:20.458761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:10.053 [2024-07-25 13:22:20.458821] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:10.053 
[2024-07-25 13:22:20.458838] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:10.053 pt2 00:22:10.053 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:10.312 [2024-07-25 13:22:20.686930] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:10.312 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:10.312 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:10.312 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:10.312 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:10.312 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:10.312 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:10.312 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:10.312 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:10.312 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:10.312 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:10.312 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.312 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.571 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:10.571 "name": "raid_bdev1", 
00:22:10.571 "uuid": "4926dc79-ac10-47a9-af1c-8749df5b9fed", 00:22:10.571 "strip_size_kb": 0, 00:22:10.571 "state": "configuring", 00:22:10.571 "raid_level": "raid1", 00:22:10.571 "superblock": true, 00:22:10.571 "num_base_bdevs": 4, 00:22:10.571 "num_base_bdevs_discovered": 1, 00:22:10.571 "num_base_bdevs_operational": 4, 00:22:10.571 "base_bdevs_list": [ 00:22:10.571 { 00:22:10.571 "name": "pt1", 00:22:10.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:10.571 "is_configured": true, 00:22:10.571 "data_offset": 2048, 00:22:10.571 "data_size": 63488 00:22:10.571 }, 00:22:10.571 { 00:22:10.571 "name": null, 00:22:10.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:10.571 "is_configured": false, 00:22:10.571 "data_offset": 2048, 00:22:10.571 "data_size": 63488 00:22:10.571 }, 00:22:10.571 { 00:22:10.571 "name": null, 00:22:10.571 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:10.571 "is_configured": false, 00:22:10.571 "data_offset": 2048, 00:22:10.571 "data_size": 63488 00:22:10.571 }, 00:22:10.571 { 00:22:10.571 "name": null, 00:22:10.571 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:10.571 "is_configured": false, 00:22:10.571 "data_offset": 2048, 00:22:10.571 "data_size": 63488 00:22:10.571 } 00:22:10.571 ] 00:22:10.571 }' 00:22:10.571 13:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:10.571 13:22:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.508 13:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:22:11.508 13:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:11.508 13:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:11.768 [2024-07-25 13:22:22.046522] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:11.768 [2024-07-25 13:22:22.046575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.768 [2024-07-25 13:22:22.046597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19fa520 00:22:11.768 [2024-07-25 13:22:22.046609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.768 [2024-07-25 13:22:22.046939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.768 [2024-07-25 13:22:22.046955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:11.768 [2024-07-25 13:22:22.047017] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:11.768 [2024-07-25 13:22:22.047035] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:11.768 pt2 00:22:11.768 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:11.768 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:11.768 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:12.027 [2024-07-25 13:22:22.275110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:12.027 [2024-07-25 13:22:22.275144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.027 [2024-07-25 13:22:22.275159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b9b4d0 00:22:12.027 [2024-07-25 13:22:22.275171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.027 [2024-07-25 13:22:22.275446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.027 [2024-07-25 
13:22:22.275462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:12.027 [2024-07-25 13:22:22.275512] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:12.027 [2024-07-25 13:22:22.275528] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:12.027 pt3 00:22:12.027 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:12.027 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:12.027 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:12.027 [2024-07-25 13:22:22.491693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:12.027 [2024-07-25 13:22:22.491732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.027 [2024-07-25 13:22:22.491750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b977a0 00:22:12.027 [2024-07-25 13:22:22.491761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.027 [2024-07-25 13:22:22.492056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.027 [2024-07-25 13:22:22.492071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:12.027 [2024-07-25 13:22:22.492121] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:12.027 [2024-07-25 13:22:22.492136] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:12.027 [2024-07-25 13:22:22.492260] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b938e0 00:22:12.027 [2024-07-25 13:22:22.492270] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 63488, blocklen 512 00:22:12.027 [2024-07-25 13:22:22.492434] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b99290 00:22:12.027 [2024-07-25 13:22:22.492557] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b938e0 00:22:12.027 [2024-07-25 13:22:22.492566] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b938e0 00:22:12.027 [2024-07-25 13:22:22.492654] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.027 pt4 00:22:12.027 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:12.027 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:12.027 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:12.027 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:12.027 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:12.027 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:12.027 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:12.027 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:12.027 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:12.027 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:12.027 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:12.027 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:12.027 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.027 
13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.287 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:12.287 "name": "raid_bdev1", 00:22:12.287 "uuid": "4926dc79-ac10-47a9-af1c-8749df5b9fed", 00:22:12.287 "strip_size_kb": 0, 00:22:12.287 "state": "online", 00:22:12.287 "raid_level": "raid1", 00:22:12.287 "superblock": true, 00:22:12.287 "num_base_bdevs": 4, 00:22:12.287 "num_base_bdevs_discovered": 4, 00:22:12.287 "num_base_bdevs_operational": 4, 00:22:12.287 "base_bdevs_list": [ 00:22:12.287 { 00:22:12.287 "name": "pt1", 00:22:12.287 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:12.287 "is_configured": true, 00:22:12.287 "data_offset": 2048, 00:22:12.287 "data_size": 63488 00:22:12.287 }, 00:22:12.287 { 00:22:12.287 "name": "pt2", 00:22:12.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:12.287 "is_configured": true, 00:22:12.287 "data_offset": 2048, 00:22:12.287 "data_size": 63488 00:22:12.287 }, 00:22:12.287 { 00:22:12.287 "name": "pt3", 00:22:12.287 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:12.287 "is_configured": true, 00:22:12.287 "data_offset": 2048, 00:22:12.287 "data_size": 63488 00:22:12.287 }, 00:22:12.287 { 00:22:12.287 "name": "pt4", 00:22:12.287 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:12.287 "is_configured": true, 00:22:12.287 "data_offset": 2048, 00:22:12.287 "data_size": 63488 00:22:12.287 } 00:22:12.287 ] 00:22:12.287 }' 00:22:12.287 13:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:12.287 13:22:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.223 13:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:22:13.223 13:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=raid_bdev1 00:22:13.223 13:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:13.223 13:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:13.223 13:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:13.223 13:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:13.223 13:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:13.223 13:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:13.481 [2024-07-25 13:22:23.819500] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:13.481 13:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:13.481 "name": "raid_bdev1", 00:22:13.481 "aliases": [ 00:22:13.481 "4926dc79-ac10-47a9-af1c-8749df5b9fed" 00:22:13.481 ], 00:22:13.481 "product_name": "Raid Volume", 00:22:13.481 "block_size": 512, 00:22:13.481 "num_blocks": 63488, 00:22:13.481 "uuid": "4926dc79-ac10-47a9-af1c-8749df5b9fed", 00:22:13.482 "assigned_rate_limits": { 00:22:13.482 "rw_ios_per_sec": 0, 00:22:13.482 "rw_mbytes_per_sec": 0, 00:22:13.482 "r_mbytes_per_sec": 0, 00:22:13.482 "w_mbytes_per_sec": 0 00:22:13.482 }, 00:22:13.482 "claimed": false, 00:22:13.482 "zoned": false, 00:22:13.482 "supported_io_types": { 00:22:13.482 "read": true, 00:22:13.482 "write": true, 00:22:13.482 "unmap": false, 00:22:13.482 "flush": false, 00:22:13.482 "reset": true, 00:22:13.482 "nvme_admin": false, 00:22:13.482 "nvme_io": false, 00:22:13.482 "nvme_io_md": false, 00:22:13.482 "write_zeroes": true, 00:22:13.482 "zcopy": false, 00:22:13.482 "get_zone_info": false, 00:22:13.482 "zone_management": false, 00:22:13.482 "zone_append": false, 00:22:13.482 "compare": false, 
00:22:13.482 "compare_and_write": false, 00:22:13.482 "abort": false, 00:22:13.482 "seek_hole": false, 00:22:13.482 "seek_data": false, 00:22:13.482 "copy": false, 00:22:13.482 "nvme_iov_md": false 00:22:13.482 }, 00:22:13.482 "memory_domains": [ 00:22:13.482 { 00:22:13.482 "dma_device_id": "system", 00:22:13.482 "dma_device_type": 1 00:22:13.482 }, 00:22:13.482 { 00:22:13.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.482 "dma_device_type": 2 00:22:13.482 }, 00:22:13.482 { 00:22:13.482 "dma_device_id": "system", 00:22:13.482 "dma_device_type": 1 00:22:13.482 }, 00:22:13.482 { 00:22:13.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.482 "dma_device_type": 2 00:22:13.482 }, 00:22:13.482 { 00:22:13.482 "dma_device_id": "system", 00:22:13.482 "dma_device_type": 1 00:22:13.482 }, 00:22:13.482 { 00:22:13.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.482 "dma_device_type": 2 00:22:13.482 }, 00:22:13.482 { 00:22:13.482 "dma_device_id": "system", 00:22:13.482 "dma_device_type": 1 00:22:13.482 }, 00:22:13.482 { 00:22:13.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.482 "dma_device_type": 2 00:22:13.482 } 00:22:13.482 ], 00:22:13.482 "driver_specific": { 00:22:13.482 "raid": { 00:22:13.482 "uuid": "4926dc79-ac10-47a9-af1c-8749df5b9fed", 00:22:13.482 "strip_size_kb": 0, 00:22:13.482 "state": "online", 00:22:13.482 "raid_level": "raid1", 00:22:13.482 "superblock": true, 00:22:13.482 "num_base_bdevs": 4, 00:22:13.482 "num_base_bdevs_discovered": 4, 00:22:13.482 "num_base_bdevs_operational": 4, 00:22:13.482 "base_bdevs_list": [ 00:22:13.482 { 00:22:13.482 "name": "pt1", 00:22:13.482 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:13.482 "is_configured": true, 00:22:13.482 "data_offset": 2048, 00:22:13.482 "data_size": 63488 00:22:13.482 }, 00:22:13.482 { 00:22:13.482 "name": "pt2", 00:22:13.482 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:13.482 "is_configured": true, 00:22:13.482 "data_offset": 2048, 00:22:13.482 "data_size": 
63488 00:22:13.482 }, 00:22:13.482 { 00:22:13.482 "name": "pt3", 00:22:13.482 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:13.482 "is_configured": true, 00:22:13.482 "data_offset": 2048, 00:22:13.482 "data_size": 63488 00:22:13.482 }, 00:22:13.482 { 00:22:13.482 "name": "pt4", 00:22:13.482 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:13.482 "is_configured": true, 00:22:13.482 "data_offset": 2048, 00:22:13.482 "data_size": 63488 00:22:13.482 } 00:22:13.482 ] 00:22:13.482 } 00:22:13.482 } 00:22:13.482 }' 00:22:13.482 13:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:13.482 13:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:13.482 pt2 00:22:13.482 pt3 00:22:13.482 pt4' 00:22:13.482 13:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:13.482 13:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:13.482 13:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:13.740 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:13.740 "name": "pt1", 00:22:13.740 "aliases": [ 00:22:13.740 "00000000-0000-0000-0000-000000000001" 00:22:13.740 ], 00:22:13.740 "product_name": "passthru", 00:22:13.740 "block_size": 512, 00:22:13.740 "num_blocks": 65536, 00:22:13.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:13.740 "assigned_rate_limits": { 00:22:13.740 "rw_ios_per_sec": 0, 00:22:13.740 "rw_mbytes_per_sec": 0, 00:22:13.740 "r_mbytes_per_sec": 0, 00:22:13.740 "w_mbytes_per_sec": 0 00:22:13.740 }, 00:22:13.740 "claimed": true, 00:22:13.740 "claim_type": "exclusive_write", 00:22:13.740 "zoned": false, 00:22:13.740 "supported_io_types": { 00:22:13.740 
"read": true, 00:22:13.740 "write": true, 00:22:13.740 "unmap": true, 00:22:13.740 "flush": true, 00:22:13.740 "reset": true, 00:22:13.740 "nvme_admin": false, 00:22:13.740 "nvme_io": false, 00:22:13.740 "nvme_io_md": false, 00:22:13.740 "write_zeroes": true, 00:22:13.740 "zcopy": true, 00:22:13.740 "get_zone_info": false, 00:22:13.740 "zone_management": false, 00:22:13.740 "zone_append": false, 00:22:13.740 "compare": false, 00:22:13.740 "compare_and_write": false, 00:22:13.740 "abort": true, 00:22:13.740 "seek_hole": false, 00:22:13.740 "seek_data": false, 00:22:13.740 "copy": true, 00:22:13.740 "nvme_iov_md": false 00:22:13.740 }, 00:22:13.740 "memory_domains": [ 00:22:13.740 { 00:22:13.740 "dma_device_id": "system", 00:22:13.740 "dma_device_type": 1 00:22:13.740 }, 00:22:13.740 { 00:22:13.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.740 "dma_device_type": 2 00:22:13.740 } 00:22:13.740 ], 00:22:13.740 "driver_specific": { 00:22:13.740 "passthru": { 00:22:13.740 "name": "pt1", 00:22:13.740 "base_bdev_name": "malloc1" 00:22:13.740 } 00:22:13.740 } 00:22:13.740 }' 00:22:13.740 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:13.740 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:13.740 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:13.740 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:13.999 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:13.999 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:13.999 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:13.999 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:13.999 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:13.999 
13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:13.999 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:13.999 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:13.999 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:13.999 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:13.999 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:14.258 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:14.258 "name": "pt2", 00:22:14.258 "aliases": [ 00:22:14.258 "00000000-0000-0000-0000-000000000002" 00:22:14.258 ], 00:22:14.258 "product_name": "passthru", 00:22:14.258 "block_size": 512, 00:22:14.258 "num_blocks": 65536, 00:22:14.258 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:14.258 "assigned_rate_limits": { 00:22:14.258 "rw_ios_per_sec": 0, 00:22:14.258 "rw_mbytes_per_sec": 0, 00:22:14.258 "r_mbytes_per_sec": 0, 00:22:14.258 "w_mbytes_per_sec": 0 00:22:14.258 }, 00:22:14.258 "claimed": true, 00:22:14.258 "claim_type": "exclusive_write", 00:22:14.258 "zoned": false, 00:22:14.258 "supported_io_types": { 00:22:14.258 "read": true, 00:22:14.258 "write": true, 00:22:14.258 "unmap": true, 00:22:14.258 "flush": true, 00:22:14.258 "reset": true, 00:22:14.258 "nvme_admin": false, 00:22:14.258 "nvme_io": false, 00:22:14.258 "nvme_io_md": false, 00:22:14.258 "write_zeroes": true, 00:22:14.258 "zcopy": true, 00:22:14.258 "get_zone_info": false, 00:22:14.258 "zone_management": false, 00:22:14.258 "zone_append": false, 00:22:14.258 "compare": false, 00:22:14.258 "compare_and_write": false, 00:22:14.258 "abort": true, 00:22:14.258 "seek_hole": false, 00:22:14.258 "seek_data": false, 
00:22:14.258 "copy": true, 00:22:14.258 "nvme_iov_md": false 00:22:14.258 }, 00:22:14.258 "memory_domains": [ 00:22:14.258 { 00:22:14.258 "dma_device_id": "system", 00:22:14.258 "dma_device_type": 1 00:22:14.258 }, 00:22:14.258 { 00:22:14.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.258 "dma_device_type": 2 00:22:14.258 } 00:22:14.258 ], 00:22:14.258 "driver_specific": { 00:22:14.258 "passthru": { 00:22:14.258 "name": "pt2", 00:22:14.258 "base_bdev_name": "malloc2" 00:22:14.258 } 00:22:14.258 } 00:22:14.258 }' 00:22:14.258 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.258 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.517 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:14.517 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:14.517 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:14.517 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:14.517 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:14.517 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:14.517 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:14.517 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:14.517 13:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:14.776 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:14.776 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:14.776 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b pt3 00:22:14.776 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:15.343 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:15.343 "name": "pt3", 00:22:15.343 "aliases": [ 00:22:15.343 "00000000-0000-0000-0000-000000000003" 00:22:15.343 ], 00:22:15.343 "product_name": "passthru", 00:22:15.343 "block_size": 512, 00:22:15.343 "num_blocks": 65536, 00:22:15.343 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:15.343 "assigned_rate_limits": { 00:22:15.343 "rw_ios_per_sec": 0, 00:22:15.343 "rw_mbytes_per_sec": 0, 00:22:15.343 "r_mbytes_per_sec": 0, 00:22:15.343 "w_mbytes_per_sec": 0 00:22:15.343 }, 00:22:15.343 "claimed": true, 00:22:15.343 "claim_type": "exclusive_write", 00:22:15.343 "zoned": false, 00:22:15.343 "supported_io_types": { 00:22:15.343 "read": true, 00:22:15.343 "write": true, 00:22:15.343 "unmap": true, 00:22:15.343 "flush": true, 00:22:15.343 "reset": true, 00:22:15.343 "nvme_admin": false, 00:22:15.343 "nvme_io": false, 00:22:15.343 "nvme_io_md": false, 00:22:15.343 "write_zeroes": true, 00:22:15.343 "zcopy": true, 00:22:15.343 "get_zone_info": false, 00:22:15.343 "zone_management": false, 00:22:15.343 "zone_append": false, 00:22:15.343 "compare": false, 00:22:15.343 "compare_and_write": false, 00:22:15.343 "abort": true, 00:22:15.343 "seek_hole": false, 00:22:15.343 "seek_data": false, 00:22:15.343 "copy": true, 00:22:15.343 "nvme_iov_md": false 00:22:15.343 }, 00:22:15.343 "memory_domains": [ 00:22:15.343 { 00:22:15.343 "dma_device_id": "system", 00:22:15.343 "dma_device_type": 1 00:22:15.343 }, 00:22:15.343 { 00:22:15.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.343 "dma_device_type": 2 00:22:15.343 } 00:22:15.343 ], 00:22:15.343 "driver_specific": { 00:22:15.343 "passthru": { 00:22:15.343 "name": "pt3", 00:22:15.343 "base_bdev_name": "malloc3" 00:22:15.343 } 00:22:15.343 } 00:22:15.343 }' 00:22:15.343 13:22:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.343 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.343 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:15.343 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.343 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.343 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:15.343 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.603 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.603 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:15.603 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.603 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.603 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:15.603 13:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:15.603 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:15.603 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:15.862 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:15.862 "name": "pt4", 00:22:15.862 "aliases": [ 00:22:15.862 "00000000-0000-0000-0000-000000000004" 00:22:15.862 ], 00:22:15.862 "product_name": "passthru", 00:22:15.862 "block_size": 512, 00:22:15.862 "num_blocks": 65536, 00:22:15.862 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:15.862 "assigned_rate_limits": { 
00:22:15.862 "rw_ios_per_sec": 0, 00:22:15.862 "rw_mbytes_per_sec": 0, 00:22:15.862 "r_mbytes_per_sec": 0, 00:22:15.862 "w_mbytes_per_sec": 0 00:22:15.862 }, 00:22:15.862 "claimed": true, 00:22:15.862 "claim_type": "exclusive_write", 00:22:15.862 "zoned": false, 00:22:15.862 "supported_io_types": { 00:22:15.862 "read": true, 00:22:15.862 "write": true, 00:22:15.862 "unmap": true, 00:22:15.862 "flush": true, 00:22:15.862 "reset": true, 00:22:15.862 "nvme_admin": false, 00:22:15.862 "nvme_io": false, 00:22:15.862 "nvme_io_md": false, 00:22:15.862 "write_zeroes": true, 00:22:15.862 "zcopy": true, 00:22:15.862 "get_zone_info": false, 00:22:15.862 "zone_management": false, 00:22:15.862 "zone_append": false, 00:22:15.862 "compare": false, 00:22:15.862 "compare_and_write": false, 00:22:15.862 "abort": true, 00:22:15.862 "seek_hole": false, 00:22:15.862 "seek_data": false, 00:22:15.862 "copy": true, 00:22:15.862 "nvme_iov_md": false 00:22:15.862 }, 00:22:15.862 "memory_domains": [ 00:22:15.862 { 00:22:15.862 "dma_device_id": "system", 00:22:15.862 "dma_device_type": 1 00:22:15.862 }, 00:22:15.862 { 00:22:15.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.862 "dma_device_type": 2 00:22:15.862 } 00:22:15.862 ], 00:22:15.862 "driver_specific": { 00:22:15.862 "passthru": { 00:22:15.862 "name": "pt4", 00:22:15.862 "base_bdev_name": "malloc4" 00:22:15.862 } 00:22:15.862 } 00:22:15.862 }' 00:22:15.862 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.862 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.862 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:15.862 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:16.121 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:16.121 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:22:16.121 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:16.121 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:16.121 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:16.121 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:16.121 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:16.121 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:16.121 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:16.121 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:22:16.380 [2024-07-25 13:22:26.771279] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:16.380 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 4926dc79-ac10-47a9-af1c-8749df5b9fed '!=' 4926dc79-ac10-47a9-af1c-8749df5b9fed ']' 00:22:16.380 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:22:16.380 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:16.380 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:16.380 13:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:16.639 [2024-07-25 13:22:27.003614] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:16.639 13:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:16.639 13:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=raid_bdev1 00:22:16.639 13:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:16.639 13:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:16.639 13:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:16.639 13:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:16.639 13:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:16.639 13:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:16.639 13:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:16.639 13:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:16.639 13:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.639 13:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.898 13:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:16.898 "name": "raid_bdev1", 00:22:16.898 "uuid": "4926dc79-ac10-47a9-af1c-8749df5b9fed", 00:22:16.898 "strip_size_kb": 0, 00:22:16.898 "state": "online", 00:22:16.898 "raid_level": "raid1", 00:22:16.898 "superblock": true, 00:22:16.898 "num_base_bdevs": 4, 00:22:16.898 "num_base_bdevs_discovered": 3, 00:22:16.898 "num_base_bdevs_operational": 3, 00:22:16.898 "base_bdevs_list": [ 00:22:16.898 { 00:22:16.898 "name": null, 00:22:16.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.898 "is_configured": false, 00:22:16.898 "data_offset": 2048, 00:22:16.898 "data_size": 63488 00:22:16.898 }, 00:22:16.898 { 00:22:16.898 "name": "pt2", 00:22:16.898 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:22:16.898 "is_configured": true, 00:22:16.898 "data_offset": 2048, 00:22:16.898 "data_size": 63488 00:22:16.898 }, 00:22:16.898 { 00:22:16.898 "name": "pt3", 00:22:16.898 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:16.898 "is_configured": true, 00:22:16.898 "data_offset": 2048, 00:22:16.898 "data_size": 63488 00:22:16.898 }, 00:22:16.898 { 00:22:16.898 "name": "pt4", 00:22:16.898 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:16.898 "is_configured": true, 00:22:16.898 "data_offset": 2048, 00:22:16.898 "data_size": 63488 00:22:16.898 } 00:22:16.898 ] 00:22:16.898 }' 00:22:16.898 13:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:16.898 13:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.466 13:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:18.098 [2024-07-25 13:22:28.311055] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:18.098 [2024-07-25 13:22:28.311084] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:18.098 [2024-07-25 13:22:28.311135] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:18.098 [2024-07-25 13:22:28.311205] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:18.098 [2024-07-25 13:22:28.311216] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b938e0 name raid_bdev1, state offline 00:22:18.098 13:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.098 13:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 
00:22:18.098 13:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:22:18.098 13:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:22:18.098 13:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:18.098 13:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:22:18.098 13:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:18.667 13:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:18.667 13:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:22:18.667 13:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:19.235 13:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:19.235 13:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:22:19.235 13:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:19.494 13:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:19.494 13:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:22:19.494 13:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:22:19.494 13:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:22:19.494 13:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:20.062 [2024-07-25 13:22:30.288231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:20.062 [2024-07-25 13:22:30.288288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.062 [2024-07-25 13:22:30.288305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b9b030 00:22:20.062 [2024-07-25 13:22:30.288318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.062 [2024-07-25 13:22:30.289824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.062 [2024-07-25 13:22:30.289853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:20.062 [2024-07-25 13:22:30.289915] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:20.062 [2024-07-25 13:22:30.289938] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:20.062 pt2 00:22:20.062 13:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:20.062 13:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:20.062 13:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:20.062 13:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:20.062 13:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:20.062 13:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:20.062 13:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:20.062 13:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:20.062 13:22:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:20.062 13:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:20.062 13:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.062 13:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.062 13:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:20.062 "name": "raid_bdev1", 00:22:20.062 "uuid": "4926dc79-ac10-47a9-af1c-8749df5b9fed", 00:22:20.062 "strip_size_kb": 0, 00:22:20.062 "state": "configuring", 00:22:20.062 "raid_level": "raid1", 00:22:20.062 "superblock": true, 00:22:20.062 "num_base_bdevs": 4, 00:22:20.062 "num_base_bdevs_discovered": 1, 00:22:20.062 "num_base_bdevs_operational": 3, 00:22:20.062 "base_bdevs_list": [ 00:22:20.062 { 00:22:20.062 "name": null, 00:22:20.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.062 "is_configured": false, 00:22:20.062 "data_offset": 2048, 00:22:20.062 "data_size": 63488 00:22:20.062 }, 00:22:20.062 { 00:22:20.062 "name": "pt2", 00:22:20.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:20.062 "is_configured": true, 00:22:20.062 "data_offset": 2048, 00:22:20.062 "data_size": 63488 00:22:20.062 }, 00:22:20.062 { 00:22:20.062 "name": null, 00:22:20.062 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:20.062 "is_configured": false, 00:22:20.062 "data_offset": 2048, 00:22:20.062 "data_size": 63488 00:22:20.062 }, 00:22:20.062 { 00:22:20.062 "name": null, 00:22:20.062 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:20.062 "is_configured": false, 00:22:20.062 "data_offset": 2048, 00:22:20.062 "data_size": 63488 00:22:20.062 } 00:22:20.062 ] 00:22:20.062 }' 00:22:20.062 13:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:20.321 
13:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.889 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:22:20.889 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:22:20.889 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:20.889 [2024-07-25 13:22:31.334967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:20.889 [2024-07-25 13:22:31.335016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.889 [2024-07-25 13:22:31.335034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19f99a0 00:22:20.889 [2024-07-25 13:22:31.335045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.889 [2024-07-25 13:22:31.335388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.889 [2024-07-25 13:22:31.335405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:20.889 [2024-07-25 13:22:31.335467] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:20.889 [2024-07-25 13:22:31.335485] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:20.889 pt3 00:22:20.889 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:20.889 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:20.889 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:20.889 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:22:20.889 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:20.889 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:20.889 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:20.889 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:20.889 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:20.889 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:20.889 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.889 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.148 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:21.148 "name": "raid_bdev1", 00:22:21.148 "uuid": "4926dc79-ac10-47a9-af1c-8749df5b9fed", 00:22:21.148 "strip_size_kb": 0, 00:22:21.148 "state": "configuring", 00:22:21.148 "raid_level": "raid1", 00:22:21.148 "superblock": true, 00:22:21.148 "num_base_bdevs": 4, 00:22:21.148 "num_base_bdevs_discovered": 2, 00:22:21.148 "num_base_bdevs_operational": 3, 00:22:21.148 "base_bdevs_list": [ 00:22:21.148 { 00:22:21.148 "name": null, 00:22:21.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.148 "is_configured": false, 00:22:21.148 "data_offset": 2048, 00:22:21.148 "data_size": 63488 00:22:21.148 }, 00:22:21.148 { 00:22:21.148 "name": "pt2", 00:22:21.148 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:21.148 "is_configured": true, 00:22:21.148 "data_offset": 2048, 00:22:21.148 "data_size": 63488 00:22:21.148 }, 00:22:21.148 { 00:22:21.148 "name": "pt3", 00:22:21.148 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:22:21.148 "is_configured": true, 00:22:21.148 "data_offset": 2048, 00:22:21.148 "data_size": 63488 00:22:21.148 }, 00:22:21.148 { 00:22:21.148 "name": null, 00:22:21.148 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:21.149 "is_configured": false, 00:22:21.149 "data_offset": 2048, 00:22:21.149 "data_size": 63488 00:22:21.149 } 00:22:21.149 ] 00:22:21.149 }' 00:22:21.149 13:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:21.149 13:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.717 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:22:21.717 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:22:21.717 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:22:21.717 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:21.975 [2024-07-25 13:22:32.353653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:21.975 [2024-07-25 13:22:32.353700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.975 [2024-07-25 13:22:32.353717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b978b0 00:22:21.975 [2024-07-25 13:22:32.353728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.975 [2024-07-25 13:22:32.354052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.976 [2024-07-25 13:22:32.354069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:21.976 [2024-07-25 13:22:32.354130] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 
00:22:21.976 [2024-07-25 13:22:32.354156] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:21.976 [2024-07-25 13:22:32.354260] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b9a7b0 00:22:21.976 [2024-07-25 13:22:32.354270] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:21.976 [2024-07-25 13:22:32.354426] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b99290 00:22:21.976 [2024-07-25 13:22:32.354546] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b9a7b0 00:22:21.976 [2024-07-25 13:22:32.354555] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b9a7b0 00:22:21.976 [2024-07-25 13:22:32.354644] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.976 pt4 00:22:21.976 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:21.976 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:21.976 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:21.976 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:21.976 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:21.976 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:21.976 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:21.976 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:21.976 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:21.976 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:22:21.976 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.976 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.234 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:22.234 "name": "raid_bdev1", 00:22:22.234 "uuid": "4926dc79-ac10-47a9-af1c-8749df5b9fed", 00:22:22.234 "strip_size_kb": 0, 00:22:22.234 "state": "online", 00:22:22.234 "raid_level": "raid1", 00:22:22.234 "superblock": true, 00:22:22.234 "num_base_bdevs": 4, 00:22:22.234 "num_base_bdevs_discovered": 3, 00:22:22.234 "num_base_bdevs_operational": 3, 00:22:22.234 "base_bdevs_list": [ 00:22:22.234 { 00:22:22.234 "name": null, 00:22:22.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.234 "is_configured": false, 00:22:22.234 "data_offset": 2048, 00:22:22.234 "data_size": 63488 00:22:22.234 }, 00:22:22.234 { 00:22:22.234 "name": "pt2", 00:22:22.234 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:22.234 "is_configured": true, 00:22:22.234 "data_offset": 2048, 00:22:22.234 "data_size": 63488 00:22:22.234 }, 00:22:22.234 { 00:22:22.234 "name": "pt3", 00:22:22.234 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:22.234 "is_configured": true, 00:22:22.235 "data_offset": 2048, 00:22:22.235 "data_size": 63488 00:22:22.235 }, 00:22:22.235 { 00:22:22.235 "name": "pt4", 00:22:22.235 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:22.235 "is_configured": true, 00:22:22.235 "data_offset": 2048, 00:22:22.235 "data_size": 63488 00:22:22.235 } 00:22:22.235 ] 00:22:22.235 }' 00:22:22.235 13:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:22.235 13:22:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.803 13:22:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:23.061 [2024-07-25 13:22:33.392377] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:23.061 [2024-07-25 13:22:33.392402] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:23.061 [2024-07-25 13:22:33.392453] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:23.061 [2024-07-25 13:22:33.392517] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:23.061 [2024-07-25 13:22:33.392532] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b9a7b0 name raid_bdev1, state offline 00:22:23.061 13:22:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.061 13:22:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:22:23.320 13:22:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:22:23.320 13:22:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:22:23.320 13:22:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 4 -gt 2 ']' 00:22:23.320 13:22:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=3 00:22:23.320 13:22:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:23.579 13:22:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:23.838 [2024-07-25 13:22:34.082158] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:23.838 [2024-07-25 13:22:34.082202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.838 [2024-07-25 13:22:34.082218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b93aa0 00:22:23.838 [2024-07-25 13:22:34.082230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.838 [2024-07-25 13:22:34.083736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.838 [2024-07-25 13:22:34.083762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:23.838 [2024-07-25 13:22:34.083822] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:23.838 [2024-07-25 13:22:34.083847] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:23.839 [2024-07-25 13:22:34.083934] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:23.839 [2024-07-25 13:22:34.083946] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:23.839 [2024-07-25 13:22:34.083959] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b92cd0 name raid_bdev1, state configuring 00:22:23.839 [2024-07-25 13:22:34.083980] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:23.839 [2024-07-25 13:22:34.084052] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:23.839 pt1 00:22:23.839 13:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4 -gt 2 ']' 00:22:23.839 13:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:23.839 13:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:23.839 13:22:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:23.839 13:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:23.839 13:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:23.839 13:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:23.839 13:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:23.839 13:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:23.839 13:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:23.839 13:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:23.839 13:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.839 13:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.098 13:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:24.098 "name": "raid_bdev1", 00:22:24.098 "uuid": "4926dc79-ac10-47a9-af1c-8749df5b9fed", 00:22:24.098 "strip_size_kb": 0, 00:22:24.098 "state": "configuring", 00:22:24.098 "raid_level": "raid1", 00:22:24.098 "superblock": true, 00:22:24.098 "num_base_bdevs": 4, 00:22:24.098 "num_base_bdevs_discovered": 2, 00:22:24.098 "num_base_bdevs_operational": 3, 00:22:24.098 "base_bdevs_list": [ 00:22:24.098 { 00:22:24.098 "name": null, 00:22:24.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.098 "is_configured": false, 00:22:24.098 "data_offset": 2048, 00:22:24.098 "data_size": 63488 00:22:24.098 }, 00:22:24.098 { 00:22:24.098 "name": "pt2", 00:22:24.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:24.098 
"is_configured": true, 00:22:24.098 "data_offset": 2048, 00:22:24.098 "data_size": 63488 00:22:24.098 }, 00:22:24.098 { 00:22:24.098 "name": "pt3", 00:22:24.098 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:24.098 "is_configured": true, 00:22:24.098 "data_offset": 2048, 00:22:24.098 "data_size": 63488 00:22:24.098 }, 00:22:24.098 { 00:22:24.098 "name": null, 00:22:24.098 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:24.098 "is_configured": false, 00:22:24.098 "data_offset": 2048, 00:22:24.098 "data_size": 63488 00:22:24.098 } 00:22:24.098 ] 00:22:24.098 }' 00:22:24.098 13:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:24.098 13:22:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.666 13:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:22:24.666 13:22:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:24.666 13:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:22:24.666 13:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:24.925 [2024-07-25 13:22:35.341486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:24.925 [2024-07-25 13:22:35.341538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.925 [2024-07-25 13:22:35.341555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b99380 00:22:24.925 [2024-07-25 13:22:35.341567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.925 [2024-07-25 13:22:35.341898] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.925 [2024-07-25 13:22:35.341916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:24.925 [2024-07-25 13:22:35.341974] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:24.925 [2024-07-25 13:22:35.341992] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:24.925 [2024-07-25 13:22:35.342097] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b9a030 00:22:24.925 [2024-07-25 13:22:35.342107] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:24.925 [2024-07-25 13:22:35.342296] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b9a320 00:22:24.925 [2024-07-25 13:22:35.342427] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b9a030 00:22:24.925 [2024-07-25 13:22:35.342437] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b9a030 00:22:24.925 [2024-07-25 13:22:35.342531] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.925 pt4 00:22:24.925 13:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:24.925 13:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:24.925 13:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:24.925 13:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:24.925 13:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:24.925 13:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:24.925 13:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:22:24.925 13:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:24.925 13:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:24.925 13:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:24.925 13:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.925 13:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.184 13:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:25.184 "name": "raid_bdev1", 00:22:25.184 "uuid": "4926dc79-ac10-47a9-af1c-8749df5b9fed", 00:22:25.184 "strip_size_kb": 0, 00:22:25.184 "state": "online", 00:22:25.184 "raid_level": "raid1", 00:22:25.184 "superblock": true, 00:22:25.184 "num_base_bdevs": 4, 00:22:25.184 "num_base_bdevs_discovered": 3, 00:22:25.184 "num_base_bdevs_operational": 3, 00:22:25.184 "base_bdevs_list": [ 00:22:25.184 { 00:22:25.184 "name": null, 00:22:25.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.184 "is_configured": false, 00:22:25.184 "data_offset": 2048, 00:22:25.184 "data_size": 63488 00:22:25.184 }, 00:22:25.185 { 00:22:25.185 "name": "pt2", 00:22:25.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:25.185 "is_configured": true, 00:22:25.185 "data_offset": 2048, 00:22:25.185 "data_size": 63488 00:22:25.185 }, 00:22:25.185 { 00:22:25.185 "name": "pt3", 00:22:25.185 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:25.185 "is_configured": true, 00:22:25.185 "data_offset": 2048, 00:22:25.185 "data_size": 63488 00:22:25.185 }, 00:22:25.185 { 00:22:25.185 "name": "pt4", 00:22:25.185 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:25.185 "is_configured": true, 00:22:25.185 "data_offset": 2048, 00:22:25.185 
"data_size": 63488 00:22:25.185 } 00:22:25.185 ] 00:22:25.185 }' 00:22:25.185 13:22:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:25.185 13:22:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.752 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:22:25.752 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:26.011 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:22:26.011 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:26.011 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:22:26.270 [2024-07-25 13:22:36.589035] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:26.270 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 4926dc79-ac10-47a9-af1c-8749df5b9fed '!=' 4926dc79-ac10-47a9-af1c-8749df5b9fed ']' 00:22:26.270 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 947409 00:22:26.270 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 947409 ']' 00:22:26.270 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 947409 00:22:26.270 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:22:26.270 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.270 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 947409 00:22:26.270 13:22:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:26.270 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:26.270 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 947409' 00:22:26.270 killing process with pid 947409 00:22:26.270 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 947409 00:22:26.270 [2024-07-25 13:22:36.667816] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:26.270 [2024-07-25 13:22:36.667869] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:26.270 [2024-07-25 13:22:36.667928] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:26.270 [2024-07-25 13:22:36.667945] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b9a030 name raid_bdev1, state offline 00:22:26.270 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 947409 00:22:26.270 [2024-07-25 13:22:36.699825] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:26.530 13:22:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:22:26.530 00:22:26.530 real 0m26.072s 00:22:26.530 user 0m47.821s 00:22:26.530 sys 0m4.523s 00:22:26.530 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:26.530 13:22:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.530 ************************************ 00:22:26.530 END TEST raid_superblock_test 00:22:26.530 ************************************ 00:22:26.530 13:22:36 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:22:26.530 13:22:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:26.530 13:22:36 bdev_raid -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:22:26.530 13:22:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:26.530 ************************************ 00:22:26.530 START TEST raid_read_error_test 00:22:26.530 ************************************ 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:26.530 13:22:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.W1e3Tejb7g 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=952412 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 952412 /var/tmp/spdk-raid.sock 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 952412 ']' 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 
00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:26.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:26.530 13:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.789 [2024-07-25 13:22:37.054205] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:22:26.790 [2024-07-25 13:22:37.054264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952412 ] 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3d:01.6 
cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3f:01.4 cannot be used 
00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:26.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:26.790 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:26.790 [2024-07-25 13:22:37.185524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.790 [2024-07-25 13:22:37.272689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.049 [2024-07-25 13:22:37.337256] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:27.049 [2024-07-25 13:22:37.337292] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 
00:22:27.617 13:22:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.617 13:22:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:22:27.617 13:22:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:27.617 13:22:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:27.875 BaseBdev1_malloc 00:22:27.875 13:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:28.134 true 00:22:28.134 13:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:28.134 [2024-07-25 13:22:38.602323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:28.134 [2024-07-25 13:22:38.602364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.134 [2024-07-25 13:22:38.602382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26b61d0 00:22:28.134 [2024-07-25 13:22:38.602393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.134 [2024-07-25 13:22:38.603969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.134 [2024-07-25 13:22:38.603997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:28.134 BaseBdev1 00:22:28.134 13:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:28.134 13:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:28.393 BaseBdev2_malloc 00:22:28.393 13:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:28.652 true 00:22:28.652 13:22:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:28.911 [2024-07-25 13:22:39.276442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:28.911 [2024-07-25 13:22:39.276479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.911 [2024-07-25 13:22:39.276497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26b9710 00:22:28.911 [2024-07-25 13:22:39.276508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.911 [2024-07-25 13:22:39.277838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.911 [2024-07-25 13:22:39.277864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:28.911 BaseBdev2 00:22:28.911 13:22:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:28.911 13:22:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:29.170 BaseBdev3_malloc 00:22:29.170 13:22:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:29.429 true 00:22:29.429 13:22:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:29.687 [2024-07-25 13:22:39.962442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:29.687 [2024-07-25 13:22:39.962481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.687 [2024-07-25 13:22:39.962501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26bbde0 00:22:29.687 [2024-07-25 13:22:39.962512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.687 [2024-07-25 13:22:39.963890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.687 [2024-07-25 13:22:39.963917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:29.687 BaseBdev3 00:22:29.687 13:22:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:29.688 13:22:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:29.946 BaseBdev4_malloc 00:22:29.946 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:29.946 true 00:22:30.205 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:30.205 [2024-07-25 13:22:40.648595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:30.205 [2024-07-25 13:22:40.648633] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.205 [2024-07-25 13:22:40.648654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26be130 00:22:30.205 [2024-07-25 13:22:40.648665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.205 [2024-07-25 13:22:40.650046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.205 [2024-07-25 13:22:40.650073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:30.205 BaseBdev4 00:22:30.205 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:30.464 [2024-07-25 13:22:40.873217] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:30.464 [2024-07-25 13:22:40.874397] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:30.464 [2024-07-25 13:22:40.874461] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:30.464 [2024-07-25 13:22:40.874514] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:30.464 [2024-07-25 13:22:40.874714] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x26c0790 00:22:30.464 [2024-07-25 13:22:40.874724] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:30.464 [2024-07-25 13:22:40.874909] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26c3a20 00:22:30.464 [2024-07-25 13:22:40.875050] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26c0790 00:22:30.464 [2024-07-25 13:22:40.875059] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26c0790 00:22:30.464 
[2024-07-25 13:22:40.875178] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:30.464 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:30.464 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:30.464 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:30.464 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:30.464 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:30.464 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:30.464 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:30.464 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:30.464 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:30.464 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:30.464 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.464 13:22:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.722 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:30.722 "name": "raid_bdev1", 00:22:30.722 "uuid": "cba2cd20-fd54-4678-be76-08e4c77d6f9a", 00:22:30.722 "strip_size_kb": 0, 00:22:30.722 "state": "online", 00:22:30.722 "raid_level": "raid1", 00:22:30.722 "superblock": true, 00:22:30.722 "num_base_bdevs": 4, 00:22:30.722 "num_base_bdevs_discovered": 4, 00:22:30.723 "num_base_bdevs_operational": 4, 00:22:30.723 
"base_bdevs_list": [ 00:22:30.723 { 00:22:30.723 "name": "BaseBdev1", 00:22:30.723 "uuid": "9a5a8f61-060d-5b01-a475-1ecdc466638b", 00:22:30.723 "is_configured": true, 00:22:30.723 "data_offset": 2048, 00:22:30.723 "data_size": 63488 00:22:30.723 }, 00:22:30.723 { 00:22:30.723 "name": "BaseBdev2", 00:22:30.723 "uuid": "e1456cc6-4ba1-517e-99cb-45ff122d4bc1", 00:22:30.723 "is_configured": true, 00:22:30.723 "data_offset": 2048, 00:22:30.723 "data_size": 63488 00:22:30.723 }, 00:22:30.723 { 00:22:30.723 "name": "BaseBdev3", 00:22:30.723 "uuid": "aa9b01a5-c9ed-5d3c-8a31-55763fdb232f", 00:22:30.723 "is_configured": true, 00:22:30.723 "data_offset": 2048, 00:22:30.723 "data_size": 63488 00:22:30.723 }, 00:22:30.723 { 00:22:30.723 "name": "BaseBdev4", 00:22:30.723 "uuid": "305968c9-058a-5d98-a5e5-d6f36be68d46", 00:22:30.723 "is_configured": true, 00:22:30.723 "data_offset": 2048, 00:22:30.723 "data_size": 63488 00:22:30.723 } 00:22:30.723 ] 00:22:30.723 }' 00:22:30.723 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:30.723 13:22:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.319 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:22:31.319 13:22:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:31.319 [2024-07-25 13:22:41.791902] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26bffc0 00:22:32.256 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:32.516 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:22:32.516 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- 
# [[ raid1 = \r\a\i\d\1 ]] 00:22:32.516 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:22:32.516 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:22:32.516 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:32.516 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:32.516 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:32.516 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:32.516 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:32.516 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:32.516 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:32.516 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:32.516 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:32.516 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:32.516 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.516 13:22:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.775 13:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.775 "name": "raid_bdev1", 00:22:32.775 "uuid": "cba2cd20-fd54-4678-be76-08e4c77d6f9a", 00:22:32.775 "strip_size_kb": 0, 00:22:32.775 "state": "online", 00:22:32.775 "raid_level": "raid1", 00:22:32.775 "superblock": true, 
00:22:32.775 "num_base_bdevs": 4, 00:22:32.775 "num_base_bdevs_discovered": 4, 00:22:32.775 "num_base_bdevs_operational": 4, 00:22:32.775 "base_bdevs_list": [ 00:22:32.775 { 00:22:32.775 "name": "BaseBdev1", 00:22:32.775 "uuid": "9a5a8f61-060d-5b01-a475-1ecdc466638b", 00:22:32.775 "is_configured": true, 00:22:32.775 "data_offset": 2048, 00:22:32.775 "data_size": 63488 00:22:32.775 }, 00:22:32.775 { 00:22:32.775 "name": "BaseBdev2", 00:22:32.775 "uuid": "e1456cc6-4ba1-517e-99cb-45ff122d4bc1", 00:22:32.775 "is_configured": true, 00:22:32.775 "data_offset": 2048, 00:22:32.775 "data_size": 63488 00:22:32.775 }, 00:22:32.775 { 00:22:32.775 "name": "BaseBdev3", 00:22:32.775 "uuid": "aa9b01a5-c9ed-5d3c-8a31-55763fdb232f", 00:22:32.775 "is_configured": true, 00:22:32.775 "data_offset": 2048, 00:22:32.775 "data_size": 63488 00:22:32.775 }, 00:22:32.775 { 00:22:32.775 "name": "BaseBdev4", 00:22:32.775 "uuid": "305968c9-058a-5d98-a5e5-d6f36be68d46", 00:22:32.775 "is_configured": true, 00:22:32.775 "data_offset": 2048, 00:22:32.775 "data_size": 63488 00:22:32.775 } 00:22:32.775 ] 00:22:32.775 }' 00:22:32.775 13:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.775 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.343 13:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:33.601 [2024-07-25 13:22:43.955934] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:33.601 [2024-07-25 13:22:43.955970] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:33.601 [2024-07-25 13:22:43.958966] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:33.601 [2024-07-25 13:22:43.959002] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.601 [2024-07-25 
13:22:43.959105] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:33.601 [2024-07-25 13:22:43.959116] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26c0790 name raid_bdev1, state offline 00:22:33.601 0 00:22:33.601 13:22:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 952412 00:22:33.601 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 952412 ']' 00:22:33.601 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 952412 00:22:33.601 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:22:33.601 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.602 13:22:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 952412 00:22:33.602 13:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:33.602 13:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:33.602 13:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 952412' 00:22:33.602 killing process with pid 952412 00:22:33.602 13:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 952412 00:22:33.602 [2024-07-25 13:22:44.032634] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:33.602 13:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 952412 00:22:33.602 [2024-07-25 13:22:44.059824] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:33.861 13:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:22:33.861 13:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.W1e3Tejb7g 
00:22:33.861 13:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:22:33.861 13:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:22:33.861 13:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:22:33.861 13:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:33.861 13:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:33.861 13:22:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:22:33.861 00:22:33.861 real 0m7.289s 00:22:33.861 user 0m11.650s 00:22:33.861 sys 0m1.250s 00:22:33.861 13:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:33.861 13:22:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.861 ************************************ 00:22:33.861 END TEST raid_read_error_test 00:22:33.861 ************************************ 00:22:33.861 13:22:44 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:22:33.861 13:22:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:33.861 13:22:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:33.861 13:22:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:34.121 ************************************ 00:22:34.121 START TEST raid_write_error_test 00:22:34.121 ************************************ 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local 
error_io_type=write 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local 
create_arg 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.Bz0YwzRQJR 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=953684 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 953684 /var/tmp/spdk-raid.sock 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 953684 ']' 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:34.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:34.121 13:22:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.121 [2024-07-25 13:22:44.426411] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:22:34.121 [2024-07-25 13:22:44.426469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953684 ] 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.121 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:34.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.122 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:34.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.122 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:34.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.122 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:34.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.122 EAL: Requested device 0000:3f:01.7 
cannot be used 00:22:34.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.122 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:34.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.122 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:34.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.122 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:34.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.122 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:34.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.122 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:34.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.122 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:34.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.122 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:34.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.122 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:34.122 [2024-07-25 13:22:44.561319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.381 [2024-07-25 13:22:44.644271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.381 [2024-07-25 13:22:44.709468] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:34.381 [2024-07-25 13:22:44.709515] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:34.948 13:22:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:34.948 13:22:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:22:34.948 13:22:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:34.948 13:22:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:35.206 BaseBdev1_malloc 00:22:35.206 13:22:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:35.465 true 00:22:35.465 13:22:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:35.723 [2024-07-25 13:22:45.991858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:35.723 [2024-07-25 13:22:45.991899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.723 [2024-07-25 13:22:45.991915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ec61d0 00:22:35.723 [2024-07-25 13:22:45.991927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.723 [2024-07-25 13:22:45.993401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.723 [2024-07-25 13:22:45.993429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:35.723 BaseBdev1 00:22:35.723 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:35.723 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:36.289 BaseBdev2_malloc 00:22:36.289 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:36.289 true 00:22:36.289 13:22:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:36.548 [2024-07-25 13:22:46.946770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:36.548 [2024-07-25 13:22:46.946812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.548 [2024-07-25 13:22:46.946830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ec9710 00:22:36.548 [2024-07-25 13:22:46.946842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.548 [2024-07-25 13:22:46.948244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.548 [2024-07-25 13:22:46.948273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:36.548 BaseBdev2 00:22:36.548 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:36.548 13:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:37.115 BaseBdev3_malloc 00:22:37.115 13:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:37.374 true 00:22:37.374 13:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:37.941 [2024-07-25 13:22:48.174198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:37.941 [2024-07-25 13:22:48.174241] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.942 [2024-07-25 13:22:48.174261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ecbde0 00:22:37.942 [2024-07-25 13:22:48.174273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.942 [2024-07-25 13:22:48.175675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.942 [2024-07-25 13:22:48.175703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:37.942 BaseBdev3 00:22:37.942 13:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:37.942 13:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:37.942 BaseBdev4_malloc 00:22:37.942 13:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:38.509 true 00:22:38.509 13:22:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:38.767 [2024-07-25 13:22:49.133350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:38.767 [2024-07-25 13:22:49.133394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.767 [2024-07-25 13:22:49.133415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ece130 00:22:38.767 [2024-07-25 13:22:49.133427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.767 [2024-07-25 13:22:49.134831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:22:38.767 [2024-07-25 13:22:49.134861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:38.767 BaseBdev4 00:22:38.767 13:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:39.026 [2024-07-25 13:22:49.345935] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:39.026 [2024-07-25 13:22:49.347063] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:39.026 [2024-07-25 13:22:49.347126] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:39.026 [2024-07-25 13:22:49.347187] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:39.026 [2024-07-25 13:22:49.347388] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ed0790 00:22:39.026 [2024-07-25 13:22:49.347398] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:39.026 [2024-07-25 13:22:49.347581] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ed3a20 00:22:39.026 [2024-07-25 13:22:49.347720] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ed0790 00:22:39.026 [2024-07-25 13:22:49.347729] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1ed0790 00:22:39.026 [2024-07-25 13:22:49.347834] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:39.026 13:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:39.026 13:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:39.026 13:22:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:39.026 13:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:39.026 13:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:39.026 13:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:39.026 13:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:39.026 13:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:39.026 13:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:39.026 13:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:39.026 13:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.026 13:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.285 13:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:39.285 "name": "raid_bdev1", 00:22:39.285 "uuid": "79190f2f-e143-4668-ab5d-7ce4caf6ee8b", 00:22:39.285 "strip_size_kb": 0, 00:22:39.285 "state": "online", 00:22:39.285 "raid_level": "raid1", 00:22:39.285 "superblock": true, 00:22:39.285 "num_base_bdevs": 4, 00:22:39.285 "num_base_bdevs_discovered": 4, 00:22:39.285 "num_base_bdevs_operational": 4, 00:22:39.285 "base_bdevs_list": [ 00:22:39.285 { 00:22:39.285 "name": "BaseBdev1", 00:22:39.285 "uuid": "d7e9fcc0-6343-57eb-80fe-7280054ea421", 00:22:39.285 "is_configured": true, 00:22:39.285 "data_offset": 2048, 00:22:39.285 "data_size": 63488 00:22:39.285 }, 00:22:39.286 { 00:22:39.286 "name": "BaseBdev2", 00:22:39.286 "uuid": "00582b39-c43e-5341-ae49-4603f4634597", 00:22:39.286 "is_configured": true, 
00:22:39.286 "data_offset": 2048, 00:22:39.286 "data_size": 63488 00:22:39.286 }, 00:22:39.286 { 00:22:39.286 "name": "BaseBdev3", 00:22:39.286 "uuid": "a8498b3f-8fa5-525b-9126-5dd38a019104", 00:22:39.286 "is_configured": true, 00:22:39.286 "data_offset": 2048, 00:22:39.286 "data_size": 63488 00:22:39.286 }, 00:22:39.286 { 00:22:39.286 "name": "BaseBdev4", 00:22:39.286 "uuid": "f4856418-be32-5868-918a-7d6c305b0bde", 00:22:39.286 "is_configured": true, 00:22:39.286 "data_offset": 2048, 00:22:39.286 "data_size": 63488 00:22:39.286 } 00:22:39.286 ] 00:22:39.286 }' 00:22:39.286 13:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:39.286 13:22:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.853 13:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:22:39.853 13:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:39.853 [2024-07-25 13:22:50.240552] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ecffc0 00:22:40.809 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:41.068 [2024-07-25 13:22:51.383085] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:22:41.068 [2024-07-25 13:22:51.383154] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:41.068 [2024-07-25 13:22:51.383363] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1ecffc0 00:22:41.068 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:22:41.068 13:22:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:22:41.068 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:22:41.068 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=3 00:22:41.068 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:41.068 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:41.068 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:41.068 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:41.068 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:41.068 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:41.068 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:41.068 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:41.068 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:41.068 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:41.068 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.068 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.327 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:41.327 "name": "raid_bdev1", 00:22:41.327 "uuid": "79190f2f-e143-4668-ab5d-7ce4caf6ee8b", 00:22:41.327 "strip_size_kb": 0, 00:22:41.327 "state": "online", 00:22:41.327 "raid_level": 
"raid1", 00:22:41.327 "superblock": true, 00:22:41.327 "num_base_bdevs": 4, 00:22:41.327 "num_base_bdevs_discovered": 3, 00:22:41.327 "num_base_bdevs_operational": 3, 00:22:41.327 "base_bdevs_list": [ 00:22:41.327 { 00:22:41.327 "name": null, 00:22:41.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.327 "is_configured": false, 00:22:41.327 "data_offset": 2048, 00:22:41.327 "data_size": 63488 00:22:41.327 }, 00:22:41.327 { 00:22:41.327 "name": "BaseBdev2", 00:22:41.327 "uuid": "00582b39-c43e-5341-ae49-4603f4634597", 00:22:41.327 "is_configured": true, 00:22:41.327 "data_offset": 2048, 00:22:41.327 "data_size": 63488 00:22:41.327 }, 00:22:41.327 { 00:22:41.327 "name": "BaseBdev3", 00:22:41.327 "uuid": "a8498b3f-8fa5-525b-9126-5dd38a019104", 00:22:41.327 "is_configured": true, 00:22:41.327 "data_offset": 2048, 00:22:41.327 "data_size": 63488 00:22:41.327 }, 00:22:41.327 { 00:22:41.327 "name": "BaseBdev4", 00:22:41.327 "uuid": "f4856418-be32-5868-918a-7d6c305b0bde", 00:22:41.327 "is_configured": true, 00:22:41.327 "data_offset": 2048, 00:22:41.327 "data_size": 63488 00:22:41.327 } 00:22:41.327 ] 00:22:41.327 }' 00:22:41.327 13:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:41.327 13:22:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.895 13:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:41.895 [2024-07-25 13:22:52.314754] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:41.895 [2024-07-25 13:22:52.314789] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:41.895 [2024-07-25 13:22:52.317729] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:41.895 [2024-07-25 13:22:52.317761] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:22:41.895 [2024-07-25 13:22:52.317851] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:41.895 [2024-07-25 13:22:52.317862] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ed0790 name raid_bdev1, state offline 00:22:41.895 0 00:22:41.895 13:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 953684 00:22:41.895 13:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 953684 ']' 00:22:41.895 13:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 953684 00:22:41.895 13:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:22:41.895 13:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:41.895 13:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 953684 00:22:42.155 13:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:42.155 13:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:42.155 13:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 953684' 00:22:42.155 killing process with pid 953684 00:22:42.155 13:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 953684 00:22:42.155 [2024-07-25 13:22:52.391827] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:42.155 13:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 953684 00:22:42.155 [2024-07-25 13:22:52.419062] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:42.155 13:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.Bz0YwzRQJR 00:22:42.155 13:22:52 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:22:42.155 13:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:22:42.155 13:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:22:42.155 13:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:22:42.155 13:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:42.155 13:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:42.155 13:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:22:42.155 00:22:42.155 real 0m8.279s 00:22:42.155 user 0m13.446s 00:22:42.155 sys 0m1.375s 00:22:42.155 13:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:42.155 13:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.155 ************************************ 00:22:42.155 END TEST raid_write_error_test 00:22:42.155 ************************************ 00:22:42.414 13:22:52 bdev_raid -- bdev/bdev_raid.sh@955 -- # '[' true = true ']' 00:22:42.414 13:22:52 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:22:42.414 13:22:52 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:22:42.414 13:22:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:22:42.414 13:22:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:42.414 13:22:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:42.414 ************************************ 00:22:42.414 START TEST raid_rebuild_test 00:22:42.414 ************************************ 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local 
raid_level=raid1 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=955257 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 955257 /var/tmp/spdk-raid.sock 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 955257 ']' 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:42.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:42.414 13:22:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.414 [2024-07-25 13:22:52.784133] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:22:42.414 [2024-07-25 13:22:52.784200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955257 ] 00:22:42.414 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:42.414 Zero copy mechanism will not be used. 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:42.414 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:42.414 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.414 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:42.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.415 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:42.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.415 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:42.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.415 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:42.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.415 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:42.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.415 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:42.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:42.415 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:42.673 [2024-07-25 13:22:52.916566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.673 [2024-07-25 13:22:53.006110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.673 [2024-07-25 13:22:53.064136] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:42.673 [2024-07-25 13:22:53.064172] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:43.241 13:22:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:43.241 13:22:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:22:43.241 13:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:43.241 13:22:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:43.499 BaseBdev1_malloc 00:22:43.499 13:22:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:43.758 [2024-07-25 13:22:54.123910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:43.758 [2024-07-25 13:22:54.123956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.758 [2024-07-25 13:22:54.123976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd215f0 00:22:43.758 [2024-07-25 13:22:54.123987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.758 [2024-07-25 13:22:54.125446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.758 [2024-07-25 13:22:54.125474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:43.758 BaseBdev1 00:22:43.758 13:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:43.758 13:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:44.017 BaseBdev2_malloc 00:22:44.017 13:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:44.277 [2024-07-25 13:22:54.581479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:44.277 [2024-07-25 13:22:54.581518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.277 [2024-07-25 13:22:54.581535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xec4fd0 00:22:44.277 [2024-07-25 13:22:54.581546] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.277 [2024-07-25 13:22:54.582854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.277 [2024-07-25 13:22:54.582880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:44.277 BaseBdev2 00:22:44.277 13:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:44.599 spare_malloc 00:22:44.599 13:22:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:44.599 spare_delay 00:22:44.599 13:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:44.858 [2024-07-25 13:22:55.271477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:44.858 [2024-07-25 13:22:55.271512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.858 [2024-07-25 13:22:55.271528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeb9340 00:22:44.858 [2024-07-25 13:22:55.271540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.858 [2024-07-25 13:22:55.272819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.858 [2024-07-25 13:22:55.272845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:44.858 spare 00:22:44.858 13:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 
'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:45.117 [2024-07-25 13:22:55.496080] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:45.117 [2024-07-25 13:22:55.497215] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:45.117 [2024-07-25 13:22:55.497281] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xd19290 00:22:45.117 [2024-07-25 13:22:55.497290] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:45.117 [2024-07-25 13:22:55.497469] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd1bde0 00:22:45.117 [2024-07-25 13:22:55.497589] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd19290 00:22:45.117 [2024-07-25 13:22:55.497598] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd19290 00:22:45.117 [2024-07-25 13:22:55.497694] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.117 13:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:45.117 13:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:45.117 13:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:45.117 13:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:45.117 13:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:45.117 13:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:45.117 13:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:45.117 13:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:45.117 13:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:22:45.117 13:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:45.117 13:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.117 13:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.376 13:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.376 "name": "raid_bdev1", 00:22:45.376 "uuid": "6575cd95-567e-4de4-9234-b333f95a7b51", 00:22:45.376 "strip_size_kb": 0, 00:22:45.376 "state": "online", 00:22:45.376 "raid_level": "raid1", 00:22:45.376 "superblock": false, 00:22:45.376 "num_base_bdevs": 2, 00:22:45.376 "num_base_bdevs_discovered": 2, 00:22:45.376 "num_base_bdevs_operational": 2, 00:22:45.376 "base_bdevs_list": [ 00:22:45.376 { 00:22:45.376 "name": "BaseBdev1", 00:22:45.376 "uuid": "b9dfec72-6960-5678-9217-1ef74852dadf", 00:22:45.376 "is_configured": true, 00:22:45.376 "data_offset": 0, 00:22:45.376 "data_size": 65536 00:22:45.376 }, 00:22:45.376 { 00:22:45.376 "name": "BaseBdev2", 00:22:45.376 "uuid": "e64ebfe2-8451-5c73-9aa6-403b6f2aedc8", 00:22:45.376 "is_configured": true, 00:22:45.376 "data_offset": 0, 00:22:45.376 "data_size": 65536 00:22:45.376 } 00:22:45.376 ] 00:22:45.376 }' 00:22:45.376 13:22:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:45.376 13:22:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.942 13:22:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:45.942 13:22:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:22:46.201 [2024-07-25 13:22:56.502927] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:22:46.201 13:22:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:22:46.201 13:22:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.201 13:22:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:46.460 13:22:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:22:46.460 13:22:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:22:46.460 13:22:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:22:46.460 13:22:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:22:46.460 13:22:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:46.460 13:22:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:46.460 13:22:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:46.460 13:22:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:46.460 13:22:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:46.460 13:22:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:46.460 13:22:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:46.460 13:22:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:46.460 13:22:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:46.460 13:22:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:46.719 
[2024-07-25 13:22:56.959939] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd1bde0 00:22:46.719 /dev/nbd0 00:22:46.719 13:22:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:46.719 1+0 records in 00:22:46.719 1+0 records out 00:22:46.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255569 s, 16.0 MB/s 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 
4096 '!=' 0 ']' 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:22:46.719 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:22:53.287 65536+0 records in 00:22:53.287 65536+0 records out 00:22:53.287 33554432 bytes (34 MB, 32 MiB) copied, 5.50467 s, 6.1 MB/s 00:22:53.287 13:23:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:53.287 13:23:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:53.287 13:23:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:53.287 13:23:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:53.287 13:23:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:53.287 13:23:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:53.287 13:23:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:53.287 [2024-07-25 13:23:02.781801] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.287 13:23:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:53.287 13:23:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:53.287 13:23:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:22:53.287 13:23:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:53.287 13:23:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:53.287 13:23:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:53.287 13:23:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:53.287 13:23:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:53.287 13:23:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:53.287 [2024-07-25 13:23:03.002471] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:53.287 13:23:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:53.287 13:23:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:53.287 13:23:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:53.287 13:23:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:53.287 13:23:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:53.287 13:23:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:53.287 13:23:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:53.287 13:23:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:53.287 13:23:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:53.287 13:23:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:53.287 13:23:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.287 13:23:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.287 13:23:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:53.287 "name": "raid_bdev1", 00:22:53.287 "uuid": "6575cd95-567e-4de4-9234-b333f95a7b51", 00:22:53.287 "strip_size_kb": 0, 00:22:53.287 "state": "online", 00:22:53.287 "raid_level": "raid1", 00:22:53.287 "superblock": false, 00:22:53.288 "num_base_bdevs": 2, 00:22:53.288 "num_base_bdevs_discovered": 1, 00:22:53.288 "num_base_bdevs_operational": 1, 00:22:53.288 "base_bdevs_list": [ 00:22:53.288 { 00:22:53.288 "name": null, 00:22:53.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.288 "is_configured": false, 00:22:53.288 "data_offset": 0, 00:22:53.288 "data_size": 65536 00:22:53.288 }, 00:22:53.288 { 00:22:53.288 "name": "BaseBdev2", 00:22:53.288 "uuid": "e64ebfe2-8451-5c73-9aa6-403b6f2aedc8", 00:22:53.288 "is_configured": true, 00:22:53.288 "data_offset": 0, 00:22:53.288 "data_size": 65536 00:22:53.288 } 00:22:53.288 ] 00:22:53.288 }' 00:22:53.288 13:23:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:53.288 13:23:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.546 13:23:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:53.805 [2024-07-25 13:23:04.045224] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:53.805 [2024-07-25 13:23:04.049923] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd1da20 00:22:53.805 [2024-07-25 13:23:04.051946] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:53.805 
13:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:54.741 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.741 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:54.741 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:54.741 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:54.741 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:54.741 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.741 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.000 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:55.000 "name": "raid_bdev1", 00:22:55.000 "uuid": "6575cd95-567e-4de4-9234-b333f95a7b51", 00:22:55.000 "strip_size_kb": 0, 00:22:55.000 "state": "online", 00:22:55.000 "raid_level": "raid1", 00:22:55.000 "superblock": false, 00:22:55.000 "num_base_bdevs": 2, 00:22:55.000 "num_base_bdevs_discovered": 2, 00:22:55.000 "num_base_bdevs_operational": 2, 00:22:55.000 "process": { 00:22:55.000 "type": "rebuild", 00:22:55.000 "target": "spare", 00:22:55.000 "progress": { 00:22:55.000 "blocks": 24576, 00:22:55.000 "percent": 37 00:22:55.000 } 00:22:55.000 }, 00:22:55.000 "base_bdevs_list": [ 00:22:55.000 { 00:22:55.000 "name": "spare", 00:22:55.000 "uuid": "20e39879-a704-5c27-9b45-cba941930484", 00:22:55.000 "is_configured": true, 00:22:55.000 "data_offset": 0, 00:22:55.000 "data_size": 65536 00:22:55.000 }, 00:22:55.000 { 00:22:55.000 "name": "BaseBdev2", 00:22:55.000 "uuid": "e64ebfe2-8451-5c73-9aa6-403b6f2aedc8", 00:22:55.000 "is_configured": 
true, 00:22:55.000 "data_offset": 0, 00:22:55.000 "data_size": 65536 00:22:55.000 } 00:22:55.000 ] 00:22:55.000 }' 00:22:55.000 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:55.000 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:55.000 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:55.000 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:55.000 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:55.259 [2024-07-25 13:23:05.602598] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:55.259 [2024-07-25 13:23:05.663598] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:55.259 [2024-07-25 13:23:05.663643] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.259 [2024-07-25 13:23:05.663658] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:55.259 [2024-07-25 13:23:05.663666] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:55.259 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:55.259 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:55.259 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:55.259 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:55.259 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:55.259 13:23:05 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:55.259 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:55.259 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:55.259 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:55.259 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:55.259 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.259 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.518 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:55.518 "name": "raid_bdev1", 00:22:55.518 "uuid": "6575cd95-567e-4de4-9234-b333f95a7b51", 00:22:55.518 "strip_size_kb": 0, 00:22:55.518 "state": "online", 00:22:55.518 "raid_level": "raid1", 00:22:55.518 "superblock": false, 00:22:55.518 "num_base_bdevs": 2, 00:22:55.518 "num_base_bdevs_discovered": 1, 00:22:55.518 "num_base_bdevs_operational": 1, 00:22:55.518 "base_bdevs_list": [ 00:22:55.518 { 00:22:55.518 "name": null, 00:22:55.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.518 "is_configured": false, 00:22:55.518 "data_offset": 0, 00:22:55.518 "data_size": 65536 00:22:55.518 }, 00:22:55.518 { 00:22:55.518 "name": "BaseBdev2", 00:22:55.518 "uuid": "e64ebfe2-8451-5c73-9aa6-403b6f2aedc8", 00:22:55.518 "is_configured": true, 00:22:55.518 "data_offset": 0, 00:22:55.518 "data_size": 65536 00:22:55.518 } 00:22:55.518 ] 00:22:55.518 }' 00:22:55.518 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:55.518 13:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.087 13:23:06 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:56.087 13:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:56.087 13:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:56.087 13:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:56.087 13:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:56.087 13:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.087 13:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.346 13:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:56.346 "name": "raid_bdev1", 00:22:56.346 "uuid": "6575cd95-567e-4de4-9234-b333f95a7b51", 00:22:56.346 "strip_size_kb": 0, 00:22:56.346 "state": "online", 00:22:56.346 "raid_level": "raid1", 00:22:56.346 "superblock": false, 00:22:56.346 "num_base_bdevs": 2, 00:22:56.346 "num_base_bdevs_discovered": 1, 00:22:56.346 "num_base_bdevs_operational": 1, 00:22:56.346 "base_bdevs_list": [ 00:22:56.346 { 00:22:56.346 "name": null, 00:22:56.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.346 "is_configured": false, 00:22:56.346 "data_offset": 0, 00:22:56.346 "data_size": 65536 00:22:56.346 }, 00:22:56.346 { 00:22:56.346 "name": "BaseBdev2", 00:22:56.346 "uuid": "e64ebfe2-8451-5c73-9aa6-403b6f2aedc8", 00:22:56.346 "is_configured": true, 00:22:56.346 "data_offset": 0, 00:22:56.346 "data_size": 65536 00:22:56.346 } 00:22:56.346 ] 00:22:56.346 }' 00:22:56.346 13:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:56.346 13:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:56.346 
13:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:56.346 13:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:56.346 13:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:56.605 [2024-07-25 13:23:07.003398] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:56.605 [2024-07-25 13:23:07.008090] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd1da20 00:22:56.605 [2024-07-25 13:23:07.009450] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:56.605 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:22:57.541 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.541 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:57.541 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:57.541 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:57.541 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:57.800 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.800 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.800 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:57.800 "name": "raid_bdev1", 00:22:57.800 "uuid": "6575cd95-567e-4de4-9234-b333f95a7b51", 00:22:57.800 "strip_size_kb": 0, 00:22:57.800 "state": 
"online", 00:22:57.800 "raid_level": "raid1", 00:22:57.800 "superblock": false, 00:22:57.800 "num_base_bdevs": 2, 00:22:57.800 "num_base_bdevs_discovered": 2, 00:22:57.800 "num_base_bdevs_operational": 2, 00:22:57.800 "process": { 00:22:57.800 "type": "rebuild", 00:22:57.800 "target": "spare", 00:22:57.800 "progress": { 00:22:57.800 "blocks": 24576, 00:22:57.800 "percent": 37 00:22:57.800 } 00:22:57.800 }, 00:22:57.800 "base_bdevs_list": [ 00:22:57.800 { 00:22:57.800 "name": "spare", 00:22:57.800 "uuid": "20e39879-a704-5c27-9b45-cba941930484", 00:22:57.800 "is_configured": true, 00:22:57.800 "data_offset": 0, 00:22:57.800 "data_size": 65536 00:22:57.800 }, 00:22:57.800 { 00:22:57.800 "name": "BaseBdev2", 00:22:57.800 "uuid": "e64ebfe2-8451-5c73-9aa6-403b6f2aedc8", 00:22:57.800 "is_configured": true, 00:22:57.800 "data_offset": 0, 00:22:57.800 "data_size": 65536 00:22:57.800 } 00:22:57.800 ] 00:22:57.800 }' 00:22:57.800 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:58.058 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:58.058 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:58.058 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:58.058 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:22:58.058 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:22:58.058 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:22:58.058 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:22:58.058 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=743 00:22:58.058 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:58.058 
13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.058 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:58.058 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:58.058 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:58.058 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:58.058 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.058 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.317 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:58.317 "name": "raid_bdev1", 00:22:58.317 "uuid": "6575cd95-567e-4de4-9234-b333f95a7b51", 00:22:58.317 "strip_size_kb": 0, 00:22:58.317 "state": "online", 00:22:58.317 "raid_level": "raid1", 00:22:58.317 "superblock": false, 00:22:58.317 "num_base_bdevs": 2, 00:22:58.317 "num_base_bdevs_discovered": 2, 00:22:58.317 "num_base_bdevs_operational": 2, 00:22:58.317 "process": { 00:22:58.317 "type": "rebuild", 00:22:58.317 "target": "spare", 00:22:58.317 "progress": { 00:22:58.317 "blocks": 30720, 00:22:58.317 "percent": 46 00:22:58.317 } 00:22:58.317 }, 00:22:58.317 "base_bdevs_list": [ 00:22:58.317 { 00:22:58.317 "name": "spare", 00:22:58.317 "uuid": "20e39879-a704-5c27-9b45-cba941930484", 00:22:58.317 "is_configured": true, 00:22:58.317 "data_offset": 0, 00:22:58.317 "data_size": 65536 00:22:58.317 }, 00:22:58.317 { 00:22:58.317 "name": "BaseBdev2", 00:22:58.317 "uuid": "e64ebfe2-8451-5c73-9aa6-403b6f2aedc8", 00:22:58.317 "is_configured": true, 00:22:58.317 "data_offset": 0, 00:22:58.317 "data_size": 65536 00:22:58.317 } 
00:22:58.317 ] 00:22:58.317 }' 00:22:58.317 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:58.317 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:58.317 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:58.317 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:58.317 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:22:59.255 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:59.255 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:59.255 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:59.255 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:59.255 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:59.255 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:59.255 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.255 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.514 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:59.514 "name": "raid_bdev1", 00:22:59.514 "uuid": "6575cd95-567e-4de4-9234-b333f95a7b51", 00:22:59.514 "strip_size_kb": 0, 00:22:59.514 "state": "online", 00:22:59.514 "raid_level": "raid1", 00:22:59.514 "superblock": false, 00:22:59.514 "num_base_bdevs": 2, 00:22:59.514 "num_base_bdevs_discovered": 2, 00:22:59.514 "num_base_bdevs_operational": 2, 
00:22:59.514 "process": { 00:22:59.514 "type": "rebuild", 00:22:59.514 "target": "spare", 00:22:59.515 "progress": { 00:22:59.515 "blocks": 57344, 00:22:59.515 "percent": 87 00:22:59.515 } 00:22:59.515 }, 00:22:59.515 "base_bdevs_list": [ 00:22:59.515 { 00:22:59.515 "name": "spare", 00:22:59.515 "uuid": "20e39879-a704-5c27-9b45-cba941930484", 00:22:59.515 "is_configured": true, 00:22:59.515 "data_offset": 0, 00:22:59.515 "data_size": 65536 00:22:59.515 }, 00:22:59.515 { 00:22:59.515 "name": "BaseBdev2", 00:22:59.515 "uuid": "e64ebfe2-8451-5c73-9aa6-403b6f2aedc8", 00:22:59.515 "is_configured": true, 00:22:59.515 "data_offset": 0, 00:22:59.515 "data_size": 65536 00:22:59.515 } 00:22:59.515 ] 00:22:59.515 }' 00:22:59.515 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:59.515 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:59.515 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:59.515 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:59.515 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:22:59.774 [2024-07-25 13:23:10.232597] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:59.774 [2024-07-25 13:23:10.232667] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:59.774 [2024-07-25 13:23:10.232705] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:00.712 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:00.712 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:00.712 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:00.712 13:23:10 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:00.712 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:00.712 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:00.712 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.712 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.971 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:00.971 "name": "raid_bdev1", 00:23:00.971 "uuid": "6575cd95-567e-4de4-9234-b333f95a7b51", 00:23:00.971 "strip_size_kb": 0, 00:23:00.971 "state": "online", 00:23:00.971 "raid_level": "raid1", 00:23:00.971 "superblock": false, 00:23:00.971 "num_base_bdevs": 2, 00:23:00.971 "num_base_bdevs_discovered": 2, 00:23:00.971 "num_base_bdevs_operational": 2, 00:23:00.971 "base_bdevs_list": [ 00:23:00.971 { 00:23:00.971 "name": "spare", 00:23:00.971 "uuid": "20e39879-a704-5c27-9b45-cba941930484", 00:23:00.971 "is_configured": true, 00:23:00.971 "data_offset": 0, 00:23:00.971 "data_size": 65536 00:23:00.971 }, 00:23:00.971 { 00:23:00.971 "name": "BaseBdev2", 00:23:00.971 "uuid": "e64ebfe2-8451-5c73-9aa6-403b6f2aedc8", 00:23:00.971 "is_configured": true, 00:23:00.971 "data_offset": 0, 00:23:00.971 "data_size": 65536 00:23:00.971 } 00:23:00.971 ] 00:23:00.971 }' 00:23:00.971 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:00.971 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:00.971 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:00.971 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == 
\s\p\a\r\e ]] 00:23:00.971 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:23:00.971 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:00.971 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:00.971 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:00.971 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:00.971 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:00.971 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.971 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:01.231 "name": "raid_bdev1", 00:23:01.231 "uuid": "6575cd95-567e-4de4-9234-b333f95a7b51", 00:23:01.231 "strip_size_kb": 0, 00:23:01.231 "state": "online", 00:23:01.231 "raid_level": "raid1", 00:23:01.231 "superblock": false, 00:23:01.231 "num_base_bdevs": 2, 00:23:01.231 "num_base_bdevs_discovered": 2, 00:23:01.231 "num_base_bdevs_operational": 2, 00:23:01.231 "base_bdevs_list": [ 00:23:01.231 { 00:23:01.231 "name": "spare", 00:23:01.231 "uuid": "20e39879-a704-5c27-9b45-cba941930484", 00:23:01.231 "is_configured": true, 00:23:01.231 "data_offset": 0, 00:23:01.231 "data_size": 65536 00:23:01.231 }, 00:23:01.231 { 00:23:01.231 "name": "BaseBdev2", 00:23:01.231 "uuid": "e64ebfe2-8451-5c73-9aa6-403b6f2aedc8", 00:23:01.231 "is_configured": true, 00:23:01.231 "data_offset": 0, 00:23:01.231 "data_size": 65536 00:23:01.231 } 00:23:01.231 ] 00:23:01.231 }' 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 
-- # jq -r '.process.type // "none"' 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.231 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.490 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:01.490 "name": "raid_bdev1", 00:23:01.490 "uuid": "6575cd95-567e-4de4-9234-b333f95a7b51", 00:23:01.490 "strip_size_kb": 0, 00:23:01.490 "state": 
"online", 00:23:01.490 "raid_level": "raid1", 00:23:01.490 "superblock": false, 00:23:01.490 "num_base_bdevs": 2, 00:23:01.490 "num_base_bdevs_discovered": 2, 00:23:01.490 "num_base_bdevs_operational": 2, 00:23:01.490 "base_bdevs_list": [ 00:23:01.490 { 00:23:01.490 "name": "spare", 00:23:01.490 "uuid": "20e39879-a704-5c27-9b45-cba941930484", 00:23:01.490 "is_configured": true, 00:23:01.490 "data_offset": 0, 00:23:01.490 "data_size": 65536 00:23:01.490 }, 00:23:01.490 { 00:23:01.490 "name": "BaseBdev2", 00:23:01.490 "uuid": "e64ebfe2-8451-5c73-9aa6-403b6f2aedc8", 00:23:01.490 "is_configured": true, 00:23:01.490 "data_offset": 0, 00:23:01.490 "data_size": 65536 00:23:01.490 } 00:23:01.490 ] 00:23:01.490 }' 00:23:01.490 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:01.490 13:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.142 13:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:02.400 [2024-07-25 13:23:12.647367] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:02.400 [2024-07-25 13:23:12.647395] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:02.401 [2024-07-25 13:23:12.647451] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:02.401 [2024-07-25 13:23:12.647503] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:02.401 [2024-07-25 13:23:12.647514] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd19290 name raid_bdev1, state offline 00:23:02.401 13:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.401 13:23:12 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:02.969 /dev/nbd0 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:23:02.969 13:23:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:02.969 1+0 records in 00:23:02.969 1+0 records out 00:23:02.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261308 s, 15.7 MB/s 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:02.969 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:03.229 /dev/nbd1 00:23:03.229 13:23:13 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:03.229 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:03.229 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:03.229 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:23:03.229 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:03.229 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:03.229 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:03.229 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:23:03.229 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:03.229 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:03.229 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:03.229 1+0 records in 00:23:03.229 1+0 records out 00:23:03.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257439 s, 15.9 MB/s 00:23:03.488 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:03.488 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:23:03.488 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:03.488 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:03.488 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:23:03.488 13:23:13 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:03.488 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:03.488 13:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:03.488 13:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:03.488 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:03.488 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:03.488 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:03.488 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:03.488 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:03.488 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:03.747 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:03.747 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:03.747 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:03.747 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:03.747 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:03.747 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:03.747 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:03.747 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:03.747 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:23:03.747 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 955257 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 955257 ']' 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 955257 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 955257 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 955257'
00:23:04.007 killing process with pid 955257
00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 955257
00:23:04.007 Received shutdown signal, test time was about 60.000000 seconds
00:23:04.007
00:23:04.007 Latency(us)
00:23:04.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:04.007 ===================================================================================================================
00:23:04.007 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:23:04.007 [2024-07-25 13:23:14.361191] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:23:04.007 13:23:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 955257
00:23:04.007 [2024-07-25 13:23:14.383965] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0
00:23:04.267
00:23:04.267 real 0m21.861s
00:23:04.267 user 0m28.917s
00:23:04.267 sys 0m5.102s
00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:23:04.267 ************************************
00:23:04.267 END TEST raid_rebuild_test
00:23:04.267 ************************************
00:23:04.267 13:23:14 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true
00:23:04.267 13:23:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:23:04.267 13:23:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:04.267 13:23:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:23:04.267 ************************************
00:23:04.267 START TEST raid_rebuild_test_sb
00:23:04.267 ************************************
13:23:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:23:04.267 
13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=959178 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 959178 /var/tmp/spdk-raid.sock 00:23:04.267 13:23:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 959178 ']' 00:23:04.268 13:23:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:04.268 13:23:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:04.268 13:23:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:04.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:23:04.268 13:23:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:04.268 13:23:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.268 [2024-07-25 13:23:14.711659] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:23:04.268 [2024-07-25 13:23:14.711703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid959178 ] 00:23:04.268 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:04.268 Zero copy mechanism will not be used. 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 
0000:3d:02.0 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3f:01.6 cannot be 
used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:04.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.527 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:04.527 [2024-07-25 13:23:14.829700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.527 [2024-07-25 13:23:14.917198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.527 [2024-07-25 13:23:14.981331] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:04.527 [2024-07-25 13:23:14.981370] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:05.467 13:23:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:05.467 13:23:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:23:05.467 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- 
# for bdev in "${base_bdevs[@]}" 00:23:05.467 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:05.467 BaseBdev1_malloc 00:23:05.467 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:05.727 [2024-07-25 13:23:16.065984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:05.727 [2024-07-25 13:23:16.066028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.727 [2024-07-25 13:23:16.066048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15345f0 00:23:05.727 [2024-07-25 13:23:16.066061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.727 [2024-07-25 13:23:16.067567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.727 [2024-07-25 13:23:16.067596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:05.727 BaseBdev1 00:23:05.727 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:05.727 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:05.986 BaseBdev2_malloc 00:23:05.987 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:06.245 [2024-07-25 13:23:16.511698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:06.245 
[2024-07-25 13:23:16.511739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:06.245 [2024-07-25 13:23:16.511756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16d7fd0 00:23:06.245 [2024-07-25 13:23:16.511767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:06.245 [2024-07-25 13:23:16.513147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:06.245 [2024-07-25 13:23:16.513173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:06.245 BaseBdev2 00:23:06.245 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:06.504 spare_malloc 00:23:06.504 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:06.504 spare_delay 00:23:06.504 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:06.763 [2024-07-25 13:23:17.193675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:06.763 [2024-07-25 13:23:17.193714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:06.763 [2024-07-25 13:23:17.193731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16cc340 00:23:06.763 [2024-07-25 13:23:17.193743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:06.763 [2024-07-25 13:23:17.195137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:06.763 [2024-07-25 13:23:17.195170] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:06.763 spare 00:23:06.763 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:07.022 [2024-07-25 13:23:17.426310] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:07.022 [2024-07-25 13:23:17.427477] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:07.022 [2024-07-25 13:23:17.427612] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x152c290 00:23:07.022 [2024-07-25 13:23:17.427624] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:07.022 [2024-07-25 13:23:17.427803] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x152ede0 00:23:07.022 [2024-07-25 13:23:17.427927] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x152c290 00:23:07.022 [2024-07-25 13:23:17.427936] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x152c290 00:23:07.022 [2024-07-25 13:23:17.428037] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:07.022 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:07.022 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:07.022 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:07.022 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:07.022 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:07.022 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- 
# local num_base_bdevs_operational=2
00:23:07.022 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:23:07.022 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:23:07.022 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:23:07.022 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:23:07.022 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:07.022 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:07.281 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:23:07.281 "name": "raid_bdev1",
00:23:07.281 "uuid": "e17d68c9-10af-4035-9072-a2566929e093",
00:23:07.281 "strip_size_kb": 0,
00:23:07.281 "state": "online",
00:23:07.281 "raid_level": "raid1",
00:23:07.281 "superblock": true,
00:23:07.281 "num_base_bdevs": 2,
00:23:07.281 "num_base_bdevs_discovered": 2,
00:23:07.281 "num_base_bdevs_operational": 2,
00:23:07.281 "base_bdevs_list": [
00:23:07.281 {
00:23:07.281 "name": "BaseBdev1",
00:23:07.281 "uuid": "72967a98-84da-5882-a7ac-d841c6a98b10",
00:23:07.281 "is_configured": true,
00:23:07.281 "data_offset": 2048,
00:23:07.281 "data_size": 63488
00:23:07.282 },
00:23:07.282 {
00:23:07.282 "name": "BaseBdev2",
00:23:07.282 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2",
00:23:07.282 "is_configured": true,
00:23:07.282 "data_offset": 2048,
00:23:07.282 "data_size": 63488
00:23:07.282 }
00:23:07.282 ]
00:23:07.282 }'
00:23:07.282 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:23:07.282 13:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:07.850 13:23:18
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:23:07.850 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:08.109 [2024-07-25 13:23:18.481301] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:08.109 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:23:08.109 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.109 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:08.367 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:23:08.367 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:23:08.367 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:23:08.367 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:23:08.367 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:08.367 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:08.367 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:08.367 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:08.367 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:08.367 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:08.367 13:23:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@12 -- # local i 00:23:08.367 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:08.367 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:08.367 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:08.626 [2024-07-25 13:23:18.942329] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x152ede0 00:23:08.626 /dev/nbd0 00:23:08.626 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:08.626 13:23:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:08.626 13:23:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:08.626 13:23:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:23:08.626 13:23:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:08.626 13:23:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:08.626 13:23:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:08.626 13:23:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:23:08.626 13:23:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:08.626 13:23:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:08.626 13:23:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:08.626 1+0 records in 00:23:08.626 1+0 records out 00:23:08.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256214 s, 16.0 MB/s 00:23:08.626 13:23:18 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:08.626 13:23:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:23:08.626 13:23:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:08.626 13:23:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:08.626 13:23:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:23:08.626 13:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:08.626 13:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:08.626 13:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:23:08.626 13:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:23:08.626 13:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:23:12.823 63488+0 records in 00:23:12.823 63488+0 records out 00:23:12.823 32505856 bytes (33 MB, 31 MiB) copied, 3.91768 s, 8.3 MB/s 00:23:12.823 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:12.823 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:12.823 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:12.823 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:12.823 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:12.823 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:12.823 13:23:22 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:12.823 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:12.823 [2024-07-25 13:23:23.176657] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:12.823 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:12.823 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:12.823 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:12.823 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:12.823 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:12.823 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:12.823 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:12.823 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:13.082 [2024-07-25 13:23:23.393239] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:13.082 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:13.082 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:13.082 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:13.082 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:13.082 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:13.082 13:23:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:23:13.082 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:23:13.082 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:23:13.082 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:23:13.082 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:23:13.082 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:13.082 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:13.341 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:23:13.341 "name": "raid_bdev1",
00:23:13.341 "uuid": "e17d68c9-10af-4035-9072-a2566929e093",
00:23:13.341 "strip_size_kb": 0,
00:23:13.341 "state": "online",
00:23:13.341 "raid_level": "raid1",
00:23:13.341 "superblock": true,
00:23:13.341 "num_base_bdevs": 2,
00:23:13.341 "num_base_bdevs_discovered": 1,
00:23:13.341 "num_base_bdevs_operational": 1,
00:23:13.341 "base_bdevs_list": [
00:23:13.341 {
00:23:13.341 "name": null,
00:23:13.341 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:13.341 "is_configured": false,
00:23:13.341 "data_offset": 2048,
00:23:13.341 "data_size": 63488
00:23:13.341 },
00:23:13.341 {
00:23:13.341 "name": "BaseBdev2",
00:23:13.341 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2",
00:23:13.341 "is_configured": true,
00:23:13.341 "data_offset": 2048,
00:23:13.341 "data_size": 63488
00:23:13.341 }
00:23:13.341 ]
00:23:13.341 }'
00:23:13.341 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:23:13.341 13:23:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set
+x
00:23:13.909 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:23:14.167 [2024-07-25 13:23:24.427972] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:23:14.167 [2024-07-25 13:23:24.432792] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x152f010
00:23:14.167 [2024-07-25 13:23:24.434814] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:23:14.167 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1
00:23:15.105 13:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:15.105 13:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:23:15.105 13:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:23:15.105 13:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare
00:23:15.105 13:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:23:15.105 13:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:15.105 13:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:15.364 13:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:15.364 "name": "raid_bdev1",
00:23:15.364 "uuid": "e17d68c9-10af-4035-9072-a2566929e093",
00:23:15.364 "strip_size_kb": 0,
00:23:15.364 "state": "online",
00:23:15.364 "raid_level": "raid1",
00:23:15.364 "superblock": true,
00:23:15.364 "num_base_bdevs": 2,
00:23:15.364 "num_base_bdevs_discovered": 2,
00:23:15.364 "num_base_bdevs_operational": 2,
00:23:15.364 "process": {
00:23:15.364 "type": "rebuild",
00:23:15.364 "target": "spare",
00:23:15.364 "progress": {
00:23:15.364 "blocks": 24576,
00:23:15.364 "percent": 38
00:23:15.364 }
00:23:15.364 },
00:23:15.364 "base_bdevs_list": [
00:23:15.364 {
00:23:15.364 "name": "spare",
00:23:15.364 "uuid": "7ce422ab-08a0-54b8-bfba-101b4f9862de",
00:23:15.364 "is_configured": true,
00:23:15.364 "data_offset": 2048,
00:23:15.364 "data_size": 63488
00:23:15.364 },
00:23:15.364 {
00:23:15.364 "name": "BaseBdev2",
00:23:15.364 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2",
00:23:15.364 "is_configured": true,
00:23:15.364 "data_offset": 2048,
00:23:15.364 "data_size": 63488
00:23:15.364 }
00:23:15.364 ]
00:23:15.364 }'
00:23:15.364 13:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:23:15.364 13:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:23:15.364 13:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:23:15.364 13:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:23:15.364 13:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:23:15.624 [2024-07-25 13:23:25.985463] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:23:15.624 [2024-07-25 13:23:26.046464] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:23:15.624 [2024-07-25 13:23:26.046506] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:15.624 [2024-07-25 13:23:26.046520] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:23:15.624 [2024-07-25 13:23:26.046527] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*:
Failed to remove target bdev: No such device 00:23:15.624 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:15.624 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:15.624 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:15.624 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:15.624 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:15.624 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:15.624 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:15.624 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:15.624 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:15.624 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:15.624 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.624 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.882 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:15.882 "name": "raid_bdev1", 00:23:15.882 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:15.882 "strip_size_kb": 0, 00:23:15.882 "state": "online", 00:23:15.882 "raid_level": "raid1", 00:23:15.882 "superblock": true, 00:23:15.882 "num_base_bdevs": 2, 00:23:15.882 "num_base_bdevs_discovered": 1, 00:23:15.882 "num_base_bdevs_operational": 1, 00:23:15.882 "base_bdevs_list": [ 00:23:15.882 { 00:23:15.882 "name": 
null, 00:23:15.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.882 "is_configured": false, 00:23:15.882 "data_offset": 2048, 00:23:15.882 "data_size": 63488 00:23:15.882 }, 00:23:15.882 { 00:23:15.882 "name": "BaseBdev2", 00:23:15.882 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:15.882 "is_configured": true, 00:23:15.882 "data_offset": 2048, 00:23:15.882 "data_size": 63488 00:23:15.882 } 00:23:15.883 ] 00:23:15.883 }' 00:23:15.883 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:15.883 13:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.450 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:16.450 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:16.450 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:16.450 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:16.450 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:16.451 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.451 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.709 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:16.709 "name": "raid_bdev1", 00:23:16.709 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:16.709 "strip_size_kb": 0, 00:23:16.709 "state": "online", 00:23:16.709 "raid_level": "raid1", 00:23:16.709 "superblock": true, 00:23:16.709 "num_base_bdevs": 2, 00:23:16.709 "num_base_bdevs_discovered": 1, 00:23:16.709 "num_base_bdevs_operational": 1, 00:23:16.709 
"base_bdevs_list": [ 00:23:16.709 { 00:23:16.709 "name": null, 00:23:16.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.709 "is_configured": false, 00:23:16.709 "data_offset": 2048, 00:23:16.709 "data_size": 63488 00:23:16.709 }, 00:23:16.709 { 00:23:16.709 "name": "BaseBdev2", 00:23:16.709 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:16.709 "is_configured": true, 00:23:16.709 "data_offset": 2048, 00:23:16.709 "data_size": 63488 00:23:16.709 } 00:23:16.709 ] 00:23:16.709 }' 00:23:16.709 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:16.709 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:16.709 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:16.709 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:16.709 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:16.967 [2024-07-25 13:23:27.382028] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:16.967 [2024-07-25 13:23:27.386724] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1232b40 00:23:16.967 [2024-07-25 13:23:27.388156] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:16.967 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:23:17.941 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:17.941 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:17.941 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 
00:23:17.941 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:17.941 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:17.941 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.941 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.200 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:18.200 "name": "raid_bdev1", 00:23:18.200 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:18.200 "strip_size_kb": 0, 00:23:18.200 "state": "online", 00:23:18.200 "raid_level": "raid1", 00:23:18.200 "superblock": true, 00:23:18.200 "num_base_bdevs": 2, 00:23:18.200 "num_base_bdevs_discovered": 2, 00:23:18.200 "num_base_bdevs_operational": 2, 00:23:18.200 "process": { 00:23:18.200 "type": "rebuild", 00:23:18.200 "target": "spare", 00:23:18.200 "progress": { 00:23:18.200 "blocks": 24576, 00:23:18.200 "percent": 38 00:23:18.200 } 00:23:18.200 }, 00:23:18.200 "base_bdevs_list": [ 00:23:18.200 { 00:23:18.200 "name": "spare", 00:23:18.200 "uuid": "7ce422ab-08a0-54b8-bfba-101b4f9862de", 00:23:18.200 "is_configured": true, 00:23:18.200 "data_offset": 2048, 00:23:18.200 "data_size": 63488 00:23:18.200 }, 00:23:18.200 { 00:23:18.200 "name": "BaseBdev2", 00:23:18.200 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:18.200 "is_configured": true, 00:23:18.200 "data_offset": 2048, 00:23:18.200 "data_size": 63488 00:23:18.200 } 00:23:18.200 ] 00:23:18.200 }' 00:23:18.200 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:18.200 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:18.200 13:23:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:18.458 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:18.458 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:23:18.458 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:23:18.458 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:23:18.458 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:23:18.458 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:23:18.458 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:23:18.458 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=763 00:23:18.458 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:18.458 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.458 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:18.458 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:18.458 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:18.458 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:18.458 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.458 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.717 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:23:18.717 "name": "raid_bdev1", 00:23:18.717 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:18.717 "strip_size_kb": 0, 00:23:18.717 "state": "online", 00:23:18.717 "raid_level": "raid1", 00:23:18.717 "superblock": true, 00:23:18.717 "num_base_bdevs": 2, 00:23:18.717 "num_base_bdevs_discovered": 2, 00:23:18.717 "num_base_bdevs_operational": 2, 00:23:18.717 "process": { 00:23:18.717 "type": "rebuild", 00:23:18.717 "target": "spare", 00:23:18.717 "progress": { 00:23:18.717 "blocks": 30720, 00:23:18.717 "percent": 48 00:23:18.717 } 00:23:18.717 }, 00:23:18.717 "base_bdevs_list": [ 00:23:18.717 { 00:23:18.717 "name": "spare", 00:23:18.717 "uuid": "7ce422ab-08a0-54b8-bfba-101b4f9862de", 00:23:18.717 "is_configured": true, 00:23:18.717 "data_offset": 2048, 00:23:18.717 "data_size": 63488 00:23:18.717 }, 00:23:18.717 { 00:23:18.717 "name": "BaseBdev2", 00:23:18.717 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:18.717 "is_configured": true, 00:23:18.717 "data_offset": 2048, 00:23:18.717 "data_size": 63488 00:23:18.717 } 00:23:18.717 ] 00:23:18.717 }' 00:23:18.717 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:18.717 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:18.717 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:18.717 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:18.717 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:19.653 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:19.653 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:19.653 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:23:19.653 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:19.653 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:19.653 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:19.653 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.653 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.912 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:19.912 "name": "raid_bdev1", 00:23:19.912 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:19.912 "strip_size_kb": 0, 00:23:19.912 "state": "online", 00:23:19.912 "raid_level": "raid1", 00:23:19.912 "superblock": true, 00:23:19.912 "num_base_bdevs": 2, 00:23:19.912 "num_base_bdevs_discovered": 2, 00:23:19.912 "num_base_bdevs_operational": 2, 00:23:19.912 "process": { 00:23:19.912 "type": "rebuild", 00:23:19.912 "target": "spare", 00:23:19.912 "progress": { 00:23:19.912 "blocks": 57344, 00:23:19.912 "percent": 90 00:23:19.912 } 00:23:19.912 }, 00:23:19.912 "base_bdevs_list": [ 00:23:19.912 { 00:23:19.912 "name": "spare", 00:23:19.912 "uuid": "7ce422ab-08a0-54b8-bfba-101b4f9862de", 00:23:19.912 "is_configured": true, 00:23:19.912 "data_offset": 2048, 00:23:19.912 "data_size": 63488 00:23:19.912 }, 00:23:19.912 { 00:23:19.912 "name": "BaseBdev2", 00:23:19.912 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:19.912 "is_configured": true, 00:23:19.912 "data_offset": 2048, 00:23:19.912 "data_size": 63488 00:23:19.912 } 00:23:19.912 ] 00:23:19.912 }' 00:23:19.912 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:19.912 13:23:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:19.912 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:19.912 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:19.912 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:20.171 [2024-07-25 13:23:30.510603] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:20.171 [2024-07-25 13:23:30.510654] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:20.171 [2024-07-25 13:23:30.510726] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.107 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:21.107 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:21.107 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:21.107 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:21.107 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:21.107 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:21.107 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.107 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.366 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:21.366 "name": "raid_bdev1", 00:23:21.366 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:21.366 "strip_size_kb": 0, 00:23:21.366 
"state": "online", 00:23:21.366 "raid_level": "raid1", 00:23:21.366 "superblock": true, 00:23:21.366 "num_base_bdevs": 2, 00:23:21.366 "num_base_bdevs_discovered": 2, 00:23:21.366 "num_base_bdevs_operational": 2, 00:23:21.366 "base_bdevs_list": [ 00:23:21.366 { 00:23:21.366 "name": "spare", 00:23:21.366 "uuid": "7ce422ab-08a0-54b8-bfba-101b4f9862de", 00:23:21.366 "is_configured": true, 00:23:21.366 "data_offset": 2048, 00:23:21.366 "data_size": 63488 00:23:21.366 }, 00:23:21.366 { 00:23:21.366 "name": "BaseBdev2", 00:23:21.366 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:21.366 "is_configured": true, 00:23:21.366 "data_offset": 2048, 00:23:21.366 "data_size": 63488 00:23:21.366 } 00:23:21.366 ] 00:23:21.366 }' 00:23:21.366 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:21.366 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:21.366 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:21.366 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:21.366 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:23:21.366 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:21.366 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:21.366 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:21.366 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:21.366 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:21.366 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:23:21.366 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.625 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:21.625 "name": "raid_bdev1", 00:23:21.625 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:21.625 "strip_size_kb": 0, 00:23:21.625 "state": "online", 00:23:21.625 "raid_level": "raid1", 00:23:21.625 "superblock": true, 00:23:21.625 "num_base_bdevs": 2, 00:23:21.625 "num_base_bdevs_discovered": 2, 00:23:21.625 "num_base_bdevs_operational": 2, 00:23:21.625 "base_bdevs_list": [ 00:23:21.625 { 00:23:21.625 "name": "spare", 00:23:21.625 "uuid": "7ce422ab-08a0-54b8-bfba-101b4f9862de", 00:23:21.625 "is_configured": true, 00:23:21.625 "data_offset": 2048, 00:23:21.625 "data_size": 63488 00:23:21.625 }, 00:23:21.625 { 00:23:21.625 "name": "BaseBdev2", 00:23:21.625 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:21.625 "is_configured": true, 00:23:21.625 "data_offset": 2048, 00:23:21.625 "data_size": 63488 00:23:21.625 } 00:23:21.625 ] 00:23:21.625 }' 00:23:21.625 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:21.625 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:21.625 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:21.625 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:21.625 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:21.625 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:21.625 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:21.625 13:23:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:21.625 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:21.625 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:21.625 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:21.625 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:21.625 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:21.625 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:21.625 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.625 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.884 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:21.884 "name": "raid_bdev1", 00:23:21.884 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:21.884 "strip_size_kb": 0, 00:23:21.884 "state": "online", 00:23:21.884 "raid_level": "raid1", 00:23:21.884 "superblock": true, 00:23:21.884 "num_base_bdevs": 2, 00:23:21.884 "num_base_bdevs_discovered": 2, 00:23:21.884 "num_base_bdevs_operational": 2, 00:23:21.884 "base_bdevs_list": [ 00:23:21.884 { 00:23:21.884 "name": "spare", 00:23:21.884 "uuid": "7ce422ab-08a0-54b8-bfba-101b4f9862de", 00:23:21.884 "is_configured": true, 00:23:21.884 "data_offset": 2048, 00:23:21.884 "data_size": 63488 00:23:21.884 }, 00:23:21.884 { 00:23:21.884 "name": "BaseBdev2", 00:23:21.884 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:21.884 "is_configured": true, 00:23:21.884 "data_offset": 2048, 00:23:21.884 "data_size": 63488 00:23:21.884 } 00:23:21.884 ] 00:23:21.884 }' 00:23:21.884 
13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:21.884 13:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.451 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:22.710 [2024-07-25 13:23:33.017630] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:22.710 [2024-07-25 13:23:33.017655] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:22.710 [2024-07-25 13:23:33.017705] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:22.710 [2024-07-25 13:23:33.017756] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:22.710 [2024-07-25 13:23:33.017767] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x152c290 name raid_bdev1, state offline 00:23:22.710 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.710 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:23:22.968 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:23:22.968 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:23:22.968 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:23:22.968 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:22.968 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:22.968 13:23:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:22.968 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:22.968 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:22.969 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:22.969 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:22.969 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:22.969 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:22.969 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:23.227 /dev/nbd0 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:23.227 1+0 records in 00:23:23.227 1+0 records out 00:23:23.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023068 s, 17.8 MB/s 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:23.227 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:23.485 /dev/nbd1 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:23.485 13:23:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:23.485 1+0 records in 00:23:23.485 1+0 records out 00:23:23.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304428 s, 13.5 MB/s 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:23.485 13:23:33 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:23.485 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:23.744 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:23.744 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:23.744 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:23.744 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:23.744 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:23.744 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:23.744 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:23.744 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:23.744 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:23.744 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:24.003 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:24.003 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:24.003 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:23:24.003 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:24.003 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:24.003 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:24.003 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:24.003 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:24.003 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:23:24.003 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:24.262 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:24.521 [2024-07-25 13:23:34.823426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:24.521 [2024-07-25 13:23:34.823466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.521 [2024-07-25 13:23:34.823485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x152d4a0 00:23:24.521 [2024-07-25 13:23:34.823498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.521 [2024-07-25 13:23:34.824998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.521 [2024-07-25 13:23:34.825026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:24.521 [2024-07-25 13:23:34.825095] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:24.521 [2024-07-25 13:23:34.825119] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
00:23:24.521 [2024-07-25 13:23:34.825216] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:24.521 spare 00:23:24.521 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:24.521 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:24.521 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:24.521 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:24.521 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:24.521 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:24.521 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:24.521 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:24.521 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:24.521 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:24.521 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.521 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.521 [2024-07-25 13:23:34.925522] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x152bb00 00:23:24.521 [2024-07-25 13:23:34.925535] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:24.521 [2024-07-25 13:23:34.925701] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x152c410 00:23:24.521 [2024-07-25 13:23:34.925831] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x152bb00 00:23:24.521 [2024-07-25 13:23:34.925841] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x152bb00 00:23:24.521 [2024-07-25 13:23:34.925933] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:24.780 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:24.780 "name": "raid_bdev1", 00:23:24.780 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:24.780 "strip_size_kb": 0, 00:23:24.780 "state": "online", 00:23:24.780 "raid_level": "raid1", 00:23:24.780 "superblock": true, 00:23:24.780 "num_base_bdevs": 2, 00:23:24.780 "num_base_bdevs_discovered": 2, 00:23:24.780 "num_base_bdevs_operational": 2, 00:23:24.780 "base_bdevs_list": [ 00:23:24.780 { 00:23:24.780 "name": "spare", 00:23:24.780 "uuid": "7ce422ab-08a0-54b8-bfba-101b4f9862de", 00:23:24.780 "is_configured": true, 00:23:24.780 "data_offset": 2048, 00:23:24.780 "data_size": 63488 00:23:24.780 }, 00:23:24.780 { 00:23:24.780 "name": "BaseBdev2", 00:23:24.780 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:24.780 "is_configured": true, 00:23:24.780 "data_offset": 2048, 00:23:24.780 "data_size": 63488 00:23:24.780 } 00:23:24.780 ] 00:23:24.780 }' 00:23:24.780 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:24.780 13:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.348 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:25.348 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:25.348 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:25.348 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:25.348 13:23:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:25.348 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.348 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.607 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:25.607 "name": "raid_bdev1", 00:23:25.607 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:25.607 "strip_size_kb": 0, 00:23:25.607 "state": "online", 00:23:25.607 "raid_level": "raid1", 00:23:25.607 "superblock": true, 00:23:25.607 "num_base_bdevs": 2, 00:23:25.607 "num_base_bdevs_discovered": 2, 00:23:25.607 "num_base_bdevs_operational": 2, 00:23:25.607 "base_bdevs_list": [ 00:23:25.607 { 00:23:25.607 "name": "spare", 00:23:25.607 "uuid": "7ce422ab-08a0-54b8-bfba-101b4f9862de", 00:23:25.607 "is_configured": true, 00:23:25.607 "data_offset": 2048, 00:23:25.607 "data_size": 63488 00:23:25.607 }, 00:23:25.607 { 00:23:25.607 "name": "BaseBdev2", 00:23:25.607 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:25.607 "is_configured": true, 00:23:25.607 "data_offset": 2048, 00:23:25.607 "data_size": 63488 00:23:25.607 } 00:23:25.607 ] 00:23:25.607 }' 00:23:25.607 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:25.607 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:25.607 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:25.607 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:25.607 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:23:25.607 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:25.866 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:23:25.866 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:26.125 [2024-07-25 13:23:36.407692] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:26.125 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:26.125 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:26.125 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:26.125 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:26.125 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:26.125 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:26.125 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:26.125 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:26.125 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:26.125 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:26.125 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.125 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.384 
13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:26.384 "name": "raid_bdev1", 00:23:26.384 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:26.384 "strip_size_kb": 0, 00:23:26.384 "state": "online", 00:23:26.384 "raid_level": "raid1", 00:23:26.384 "superblock": true, 00:23:26.384 "num_base_bdevs": 2, 00:23:26.384 "num_base_bdevs_discovered": 1, 00:23:26.384 "num_base_bdevs_operational": 1, 00:23:26.384 "base_bdevs_list": [ 00:23:26.384 { 00:23:26.384 "name": null, 00:23:26.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.384 "is_configured": false, 00:23:26.384 "data_offset": 2048, 00:23:26.384 "data_size": 63488 00:23:26.384 }, 00:23:26.384 { 00:23:26.384 "name": "BaseBdev2", 00:23:26.384 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:26.384 "is_configured": true, 00:23:26.384 "data_offset": 2048, 00:23:26.384 "data_size": 63488 00:23:26.384 } 00:23:26.384 ] 00:23:26.384 }' 00:23:26.384 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:26.384 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.952 13:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:27.210 [2024-07-25 13:23:37.442429] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:27.210 [2024-07-25 13:23:37.442556] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:27.210 [2024-07-25 13:23:37.442571] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:27.210 [2024-07-25 13:23:37.442598] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:27.210 [2024-07-25 13:23:37.447190] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1531e90 00:23:27.210 [2024-07-25 13:23:37.449352] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:27.210 13:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:23:28.146 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:28.146 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:28.146 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:28.146 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:28.146 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:28.146 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.146 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.405 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:28.405 "name": "raid_bdev1", 00:23:28.405 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:28.405 "strip_size_kb": 0, 00:23:28.405 "state": "online", 00:23:28.405 "raid_level": "raid1", 00:23:28.405 "superblock": true, 00:23:28.405 "num_base_bdevs": 2, 00:23:28.405 "num_base_bdevs_discovered": 2, 00:23:28.405 "num_base_bdevs_operational": 2, 00:23:28.405 "process": { 00:23:28.405 "type": "rebuild", 00:23:28.405 "target": "spare", 00:23:28.405 "progress": { 00:23:28.405 "blocks": 24576, 00:23:28.405 "percent": 38 
00:23:28.405 } 00:23:28.405 }, 00:23:28.405 "base_bdevs_list": [ 00:23:28.405 { 00:23:28.405 "name": "spare", 00:23:28.405 "uuid": "7ce422ab-08a0-54b8-bfba-101b4f9862de", 00:23:28.405 "is_configured": true, 00:23:28.405 "data_offset": 2048, 00:23:28.405 "data_size": 63488 00:23:28.405 }, 00:23:28.405 { 00:23:28.405 "name": "BaseBdev2", 00:23:28.405 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:28.405 "is_configured": true, 00:23:28.405 "data_offset": 2048, 00:23:28.405 "data_size": 63488 00:23:28.405 } 00:23:28.405 ] 00:23:28.405 }' 00:23:28.405 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:28.405 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:28.405 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:28.405 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:28.405 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:28.665 [2024-07-25 13:23:38.983977] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:28.665 [2024-07-25 13:23:39.060991] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:28.665 [2024-07-25 13:23:39.061036] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.665 [2024-07-25 13:23:39.061050] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:28.665 [2024-07-25 13:23:39.061058] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:28.665 13:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:28.665 13:23:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:28.665 13:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:28.665 13:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:28.665 13:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:28.665 13:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:28.665 13:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:28.665 13:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:28.665 13:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:28.665 13:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:28.665 13:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.665 13:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.924 13:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:28.924 "name": "raid_bdev1", 00:23:28.924 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:28.924 "strip_size_kb": 0, 00:23:28.924 "state": "online", 00:23:28.924 "raid_level": "raid1", 00:23:28.924 "superblock": true, 00:23:28.924 "num_base_bdevs": 2, 00:23:28.924 "num_base_bdevs_discovered": 1, 00:23:28.924 "num_base_bdevs_operational": 1, 00:23:28.924 "base_bdevs_list": [ 00:23:28.924 { 00:23:28.924 "name": null, 00:23:28.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.924 "is_configured": false, 00:23:28.924 "data_offset": 2048, 00:23:28.924 "data_size": 63488 00:23:28.924 }, 00:23:28.924 { 
00:23:28.924 "name": "BaseBdev2", 00:23:28.924 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:28.924 "is_configured": true, 00:23:28.924 "data_offset": 2048, 00:23:28.924 "data_size": 63488 00:23:28.924 } 00:23:28.924 ] 00:23:28.924 }' 00:23:28.924 13:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:28.924 13:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.492 13:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:29.751 [2024-07-25 13:23:40.083899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:29.751 [2024-07-25 13:23:40.083947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.751 [2024-07-25 13:23:40.083966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16cd2f0 00:23:29.751 [2024-07-25 13:23:40.083978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.751 [2024-07-25 13:23:40.084329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.751 [2024-07-25 13:23:40.084347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:29.751 [2024-07-25 13:23:40.084422] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:29.751 [2024-07-25 13:23:40.084433] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:29.751 [2024-07-25 13:23:40.084449] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:29.751 [2024-07-25 13:23:40.084467] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:29.751 [2024-07-25 13:23:40.089184] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x152e370 00:23:29.751 spare 00:23:29.751 [2024-07-25 13:23:40.090463] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:29.751 13:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:23:30.688 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:30.688 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:30.688 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:30.688 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:30.688 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:30.688 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.688 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.017 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:31.017 "name": "raid_bdev1", 00:23:31.017 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:31.017 "strip_size_kb": 0, 00:23:31.017 "state": "online", 00:23:31.017 "raid_level": "raid1", 00:23:31.017 "superblock": true, 00:23:31.017 "num_base_bdevs": 2, 00:23:31.017 "num_base_bdevs_discovered": 2, 00:23:31.017 "num_base_bdevs_operational": 2, 00:23:31.017 "process": { 00:23:31.017 "type": "rebuild", 00:23:31.017 "target": "spare", 00:23:31.017 "progress": { 00:23:31.017 "blocks": 24576, 00:23:31.017 
"percent": 38 00:23:31.017 } 00:23:31.017 }, 00:23:31.017 "base_bdevs_list": [ 00:23:31.017 { 00:23:31.017 "name": "spare", 00:23:31.017 "uuid": "7ce422ab-08a0-54b8-bfba-101b4f9862de", 00:23:31.017 "is_configured": true, 00:23:31.017 "data_offset": 2048, 00:23:31.017 "data_size": 63488 00:23:31.017 }, 00:23:31.017 { 00:23:31.017 "name": "BaseBdev2", 00:23:31.017 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:31.017 "is_configured": true, 00:23:31.017 "data_offset": 2048, 00:23:31.017 "data_size": 63488 00:23:31.017 } 00:23:31.017 ] 00:23:31.017 }' 00:23:31.017 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:31.017 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:31.017 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:31.017 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:31.017 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:31.276 [2024-07-25 13:23:41.637508] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:31.276 [2024-07-25 13:23:41.702053] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:31.276 [2024-07-25 13:23:41.702095] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.276 [2024-07-25 13:23:41.702109] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:31.276 [2024-07-25 13:23:41.702116] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:31.276 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:31.276 
13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:31.276 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:31.276 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:31.276 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:31.276 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:31.276 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:31.276 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:31.276 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:31.276 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:31.276 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.276 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.535 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.535 "name": "raid_bdev1", 00:23:31.535 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:31.535 "strip_size_kb": 0, 00:23:31.535 "state": "online", 00:23:31.535 "raid_level": "raid1", 00:23:31.535 "superblock": true, 00:23:31.535 "num_base_bdevs": 2, 00:23:31.535 "num_base_bdevs_discovered": 1, 00:23:31.535 "num_base_bdevs_operational": 1, 00:23:31.535 "base_bdevs_list": [ 00:23:31.535 { 00:23:31.535 "name": null, 00:23:31.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.535 "is_configured": false, 00:23:31.535 "data_offset": 2048, 00:23:31.535 "data_size": 63488 00:23:31.535 }, 
00:23:31.535 { 00:23:31.535 "name": "BaseBdev2", 00:23:31.535 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:31.535 "is_configured": true, 00:23:31.535 "data_offset": 2048, 00:23:31.535 "data_size": 63488 00:23:31.535 } 00:23:31.535 ] 00:23:31.535 }' 00:23:31.535 13:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.535 13:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.103 13:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:32.103 13:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:32.103 13:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:32.103 13:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:32.103 13:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:32.103 13:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.103 13:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.362 13:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:32.362 "name": "raid_bdev1", 00:23:32.362 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:32.362 "strip_size_kb": 0, 00:23:32.362 "state": "online", 00:23:32.362 "raid_level": "raid1", 00:23:32.362 "superblock": true, 00:23:32.362 "num_base_bdevs": 2, 00:23:32.362 "num_base_bdevs_discovered": 1, 00:23:32.362 "num_base_bdevs_operational": 1, 00:23:32.362 "base_bdevs_list": [ 00:23:32.362 { 00:23:32.362 "name": null, 00:23:32.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.362 "is_configured": false, 00:23:32.362 "data_offset": 2048, 
00:23:32.362 "data_size": 63488 00:23:32.362 }, 00:23:32.362 { 00:23:32.362 "name": "BaseBdev2", 00:23:32.362 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:32.362 "is_configured": true, 00:23:32.362 "data_offset": 2048, 00:23:32.362 "data_size": 63488 00:23:32.362 } 00:23:32.362 ] 00:23:32.362 }' 00:23:32.362 13:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:32.362 13:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:32.362 13:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:32.362 13:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:32.362 13:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:32.621 13:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:32.880 [2024-07-25 13:23:43.282484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:32.880 [2024-07-25 13:23:43.282526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.880 [2024-07-25 13:23:43.282543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x152def0 00:23:32.880 [2024-07-25 13:23:43.282562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.880 [2024-07-25 13:23:43.282869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.880 [2024-07-25 13:23:43.282886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:32.880 [2024-07-25 13:23:43.282943] bdev_raid.c:3875:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev BaseBdev1 00:23:32.880 [2024-07-25 13:23:43.282955] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:32.880 [2024-07-25 13:23:43.282965] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:32.880 BaseBdev1 00:23:32.880 13:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:23:34.256 13:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:34.256 13:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:34.256 13:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:34.256 13:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:34.256 13:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:34.256 13:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:34.256 13:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:34.256 13:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:34.256 13:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:34.256 13:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:34.256 13:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.256 13:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.256 13:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:23:34.256 "name": "raid_bdev1", 00:23:34.256 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:34.256 "strip_size_kb": 0, 00:23:34.256 "state": "online", 00:23:34.256 "raid_level": "raid1", 00:23:34.256 "superblock": true, 00:23:34.256 "num_base_bdevs": 2, 00:23:34.256 "num_base_bdevs_discovered": 1, 00:23:34.256 "num_base_bdevs_operational": 1, 00:23:34.256 "base_bdevs_list": [ 00:23:34.256 { 00:23:34.256 "name": null, 00:23:34.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.256 "is_configured": false, 00:23:34.256 "data_offset": 2048, 00:23:34.256 "data_size": 63488 00:23:34.256 }, 00:23:34.256 { 00:23:34.256 "name": "BaseBdev2", 00:23:34.256 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:34.256 "is_configured": true, 00:23:34.256 "data_offset": 2048, 00:23:34.256 "data_size": 63488 00:23:34.256 } 00:23:34.256 ] 00:23:34.256 }' 00:23:34.256 13:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:34.256 13:23:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.823 13:23:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:34.823 13:23:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:34.823 13:23:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:34.823 13:23:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:34.823 13:23:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:34.823 13:23:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.823 13:23:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.081 13:23:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:35.081 "name": "raid_bdev1", 00:23:35.081 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:35.081 "strip_size_kb": 0, 00:23:35.081 "state": "online", 00:23:35.081 "raid_level": "raid1", 00:23:35.081 "superblock": true, 00:23:35.081 "num_base_bdevs": 2, 00:23:35.081 "num_base_bdevs_discovered": 1, 00:23:35.081 "num_base_bdevs_operational": 1, 00:23:35.081 "base_bdevs_list": [ 00:23:35.081 { 00:23:35.081 "name": null, 00:23:35.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.081 "is_configured": false, 00:23:35.081 "data_offset": 2048, 00:23:35.081 "data_size": 63488 00:23:35.081 }, 00:23:35.081 { 00:23:35.081 "name": "BaseBdev2", 00:23:35.081 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:35.081 "is_configured": true, 00:23:35.081 "data_offset": 2048, 00:23:35.081 "data_size": 63488 00:23:35.081 } 00:23:35.081 ] 00:23:35.081 }' 00:23:35.081 13:23:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:35.081 13:23:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:35.081 13:23:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:35.082 13:23:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:35.082 13:23:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:35.082 13:23:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:23:35.082 13:23:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:35.082 13:23:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:35.082 13:23:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:35.082 13:23:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:35.082 13:23:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:35.082 13:23:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:35.082 13:23:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:35.082 13:23:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:35.082 13:23:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:35.082 13:23:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:35.340 [2024-07-25 13:23:45.620659] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:35.340 [2024-07-25 13:23:45.620766] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:35.340 [2024-07-25 13:23:45.620780] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:35.340 request: 00:23:35.340 { 00:23:35.340 "base_bdev": "BaseBdev1", 00:23:35.340 "raid_bdev": "raid_bdev1", 00:23:35.340 "method": "bdev_raid_add_base_bdev", 00:23:35.340 
"req_id": 1 00:23:35.340 } 00:23:35.340 Got JSON-RPC error response 00:23:35.340 response: 00:23:35.340 { 00:23:35.340 "code": -22, 00:23:35.340 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:35.340 } 00:23:35.340 13:23:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:23:35.340 13:23:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:35.340 13:23:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:35.340 13:23:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:35.340 13:23:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:23:36.275 13:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:36.275 13:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:36.275 13:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:36.275 13:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:36.275 13:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:36.275 13:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:36.275 13:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.275 13:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.275 13:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.275 13:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.275 13:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.275 13:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.534 13:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:36.534 "name": "raid_bdev1", 00:23:36.534 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:36.534 "strip_size_kb": 0, 00:23:36.534 "state": "online", 00:23:36.534 "raid_level": "raid1", 00:23:36.534 "superblock": true, 00:23:36.534 "num_base_bdevs": 2, 00:23:36.534 "num_base_bdevs_discovered": 1, 00:23:36.534 "num_base_bdevs_operational": 1, 00:23:36.534 "base_bdevs_list": [ 00:23:36.534 { 00:23:36.534 "name": null, 00:23:36.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.534 "is_configured": false, 00:23:36.534 "data_offset": 2048, 00:23:36.534 "data_size": 63488 00:23:36.534 }, 00:23:36.534 { 00:23:36.534 "name": "BaseBdev2", 00:23:36.534 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:36.534 "is_configured": true, 00:23:36.534 "data_offset": 2048, 00:23:36.534 "data_size": 63488 00:23:36.534 } 00:23:36.534 ] 00:23:36.534 }' 00:23:36.534 13:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:36.534 13:23:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.101 13:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:37.101 13:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:37.101 13:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:37.101 13:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:37.101 13:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:37.101 13:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.101 13:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.360 13:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:37.360 "name": "raid_bdev1", 00:23:37.360 "uuid": "e17d68c9-10af-4035-9072-a2566929e093", 00:23:37.360 "strip_size_kb": 0, 00:23:37.360 "state": "online", 00:23:37.360 "raid_level": "raid1", 00:23:37.360 "superblock": true, 00:23:37.360 "num_base_bdevs": 2, 00:23:37.360 "num_base_bdevs_discovered": 1, 00:23:37.360 "num_base_bdevs_operational": 1, 00:23:37.360 "base_bdevs_list": [ 00:23:37.360 { 00:23:37.360 "name": null, 00:23:37.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.360 "is_configured": false, 00:23:37.360 "data_offset": 2048, 00:23:37.360 "data_size": 63488 00:23:37.360 }, 00:23:37.360 { 00:23:37.360 "name": "BaseBdev2", 00:23:37.360 "uuid": "29597c12-b4a0-5471-bc97-ed2f25955ad2", 00:23:37.360 "is_configured": true, 00:23:37.360 "data_offset": 2048, 00:23:37.360 "data_size": 63488 00:23:37.360 } 00:23:37.360 ] 00:23:37.360 }' 00:23:37.360 13:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:37.361 13:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:37.361 13:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:37.361 13:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:37.361 13:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 959178 00:23:37.361 13:23:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 959178 ']' 00:23:37.361 13:23:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 959178 00:23:37.361 13:23:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:23:37.361 13:23:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:37.361 13:23:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 959178 00:23:37.361 13:23:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:37.361 13:23:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:37.361 13:23:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 959178' 00:23:37.361 killing process with pid 959178 00:23:37.361 13:23:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 959178 00:23:37.361 Received shutdown signal, test time was about 60.000000 seconds 00:23:37.361 00:23:37.361 Latency(us) 00:23:37.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.361 =================================================================================================================== 00:23:37.361 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:37.361 [2024-07-25 13:23:47.801480] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:37.361 [2024-07-25 13:23:47.801557] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:37.361 [2024-07-25 13:23:47.801595] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:37.361 [2024-07-25 13:23:47.801607] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x152bb00 name raid_bdev1, state offline 00:23:37.361 13:23:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 959178 00:23:37.361 [2024-07-25 13:23:47.824871] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:37.620 13:23:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:23:37.620 00:23:37.620 real 0m33.348s 00:23:37.620 user 0m49.190s 00:23:37.620 sys 0m5.924s 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.620 ************************************ 00:23:37.620 END TEST raid_rebuild_test_sb 00:23:37.620 ************************************ 00:23:37.620 13:23:48 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:23:37.620 13:23:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:23:37.620 13:23:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:37.620 13:23:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:37.620 ************************************ 00:23:37.620 START TEST raid_rebuild_test_io 00:23:37.620 ************************************ 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo 
BaseBdev1 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:37.620 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:37.879 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:23:37.879 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:23:37.880 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:23:37.880 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:23:37.880 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:23:37.880 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:23:37.880 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:23:37.880 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:23:37.880 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:23:37.880 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=965214 00:23:37.880 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 965214 /var/tmp/spdk-raid.sock 00:23:37.880 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw 
-M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:37.880 13:23:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 965214 ']' 00:23:37.880 13:23:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:37.880 13:23:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:37.880 13:23:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:37.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:37.880 13:23:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:37.880 13:23:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:37.880 [2024-07-25 13:23:48.166725] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:23:37.880 [2024-07-25 13:23:48.166784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid965214 ] 00:23:37.880 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:37.880 Zero copy mechanism will not be used. 
00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:37.880 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:37.880 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:37.880 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:37.880 [2024-07-25 13:23:48.297430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.139 [2024-07-25 13:23:48.383721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.139 [2024-07-25 13:23:48.438450] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:38.139 [2024-07-25 13:23:48.438481] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:38.707 13:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:38.707 13:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:23:38.707 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:38.707 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:38.966 BaseBdev1_malloc 00:23:38.966 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:39.226 [2024-07-25 13:23:49.506756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:39.226 [2024-07-25 13:23:49.506799] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:23:39.226 [2024-07-25 13:23:49.506818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13a45f0 00:23:39.226 [2024-07-25 13:23:49.506829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.226 [2024-07-25 13:23:49.508344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.226 [2024-07-25 13:23:49.508372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:39.226 BaseBdev1 00:23:39.226 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:39.226 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:39.485 BaseBdev2_malloc 00:23:39.485 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:39.485 [2024-07-25 13:23:49.964307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:39.485 [2024-07-25 13:23:49.964349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.485 [2024-07-25 13:23:49.964365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1547fd0 00:23:39.485 [2024-07-25 13:23:49.964377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.485 [2024-07-25 13:23:49.965788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.485 [2024-07-25 13:23:49.965815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:39.485 BaseBdev2 00:23:39.744 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:39.744 spare_malloc 00:23:39.744 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:40.002 spare_delay 00:23:40.003 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:40.261 [2024-07-25 13:23:50.646442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:40.261 [2024-07-25 13:23:50.646481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:40.261 [2024-07-25 13:23:50.646498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x153c340 00:23:40.261 [2024-07-25 13:23:50.646510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.261 [2024-07-25 13:23:50.647916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.261 [2024-07-25 13:23:50.647943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:40.261 spare 00:23:40.261 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:40.520 [2024-07-25 13:23:50.871048] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:40.520 [2024-07-25 13:23:50.872223] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:40.520 [2024-07-25 13:23:50.872289] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0x139c290 00:23:40.520 [2024-07-25 13:23:50.872299] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:40.520 [2024-07-25 13:23:50.872490] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x139ede0 00:23:40.520 [2024-07-25 13:23:50.872615] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x139c290 00:23:40.520 [2024-07-25 13:23:50.872624] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x139c290 00:23:40.520 [2024-07-25 13:23:50.872724] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:40.520 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:40.520 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:40.520 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:40.520 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:40.520 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:40.520 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:40.520 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:40.520 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:40.520 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:40.520 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:40.520 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.520 13:23:50 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.779 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:40.779 "name": "raid_bdev1", 00:23:40.779 "uuid": "1e8c9f36-e55a-43ba-aaa3-a89dfd075056", 00:23:40.779 "strip_size_kb": 0, 00:23:40.779 "state": "online", 00:23:40.779 "raid_level": "raid1", 00:23:40.779 "superblock": false, 00:23:40.779 "num_base_bdevs": 2, 00:23:40.779 "num_base_bdevs_discovered": 2, 00:23:40.779 "num_base_bdevs_operational": 2, 00:23:40.779 "base_bdevs_list": [ 00:23:40.779 { 00:23:40.779 "name": "BaseBdev1", 00:23:40.779 "uuid": "23f1561b-ce3e-5935-b133-fcbf95c3b038", 00:23:40.779 "is_configured": true, 00:23:40.779 "data_offset": 0, 00:23:40.779 "data_size": 65536 00:23:40.779 }, 00:23:40.779 { 00:23:40.779 "name": "BaseBdev2", 00:23:40.779 "uuid": "f3643395-93cf-590e-b597-86ed2eb92e58", 00:23:40.779 "is_configured": true, 00:23:40.779 "data_offset": 0, 00:23:40.779 "data_size": 65536 00:23:40.779 } 00:23:40.779 ] 00:23:40.779 }' 00:23:40.779 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:40.779 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.346 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:41.346 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:23:41.605 [2024-07-25 13:23:51.914001] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:41.605 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:23:41.605 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.605 13:23:51 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:41.864 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:23:41.864 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:23:41.864 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:41.864 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:41.864 [2024-07-25 13:23:52.264673] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x139b4a0 00:23:41.864 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:41.864 Zero copy mechanism will not be used. 00:23:41.864 Running I/O for 60 seconds... 
00:23:42.123 [2024-07-25 13:23:52.369671] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:42.123 [2024-07-25 13:23:52.377184] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x139b4a0 00:23:42.123 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:42.123 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:42.123 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:42.123 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:42.123 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:42.123 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:42.123 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:42.123 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:42.123 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:42.123 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:42.123 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.123 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.382 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:42.382 "name": "raid_bdev1", 00:23:42.382 "uuid": "1e8c9f36-e55a-43ba-aaa3-a89dfd075056", 00:23:42.382 "strip_size_kb": 0, 00:23:42.382 "state": "online", 00:23:42.382 "raid_level": "raid1", 00:23:42.382 "superblock": false, 
00:23:42.382 "num_base_bdevs": 2, 00:23:42.382 "num_base_bdevs_discovered": 1, 00:23:42.382 "num_base_bdevs_operational": 1, 00:23:42.382 "base_bdevs_list": [ 00:23:42.382 { 00:23:42.382 "name": null, 00:23:42.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.382 "is_configured": false, 00:23:42.382 "data_offset": 0, 00:23:42.382 "data_size": 65536 00:23:42.382 }, 00:23:42.382 { 00:23:42.382 "name": "BaseBdev2", 00:23:42.382 "uuid": "f3643395-93cf-590e-b597-86ed2eb92e58", 00:23:42.382 "is_configured": true, 00:23:42.382 "data_offset": 0, 00:23:42.382 "data_size": 65536 00:23:42.382 } 00:23:42.382 ] 00:23:42.382 }' 00:23:42.382 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:42.382 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:43.039 13:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:43.039 [2024-07-25 13:23:53.473270] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:43.298 13:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:43.298 [2024-07-25 13:23:53.546632] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13a0a90 00:23:43.298 [2024-07-25 13:23:53.548951] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:43.298 [2024-07-25 13:23:53.658780] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:43.298 [2024-07-25 13:23:53.659049] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:43.298 [2024-07-25 13:23:53.777610] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:23:43.298 [2024-07-25 13:23:53.777749] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:43.557 [2024-07-25 13:23:54.043736] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:43.557 [2024-07-25 13:23:54.044467] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:43.816 [2024-07-25 13:23:54.262942] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:43.816 [2024-07-25 13:23:54.263055] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:44.074 13:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.074 13:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:44.074 13:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:44.074 13:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:44.074 13:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:44.074 13:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.074 13:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.334 [2024-07-25 13:23:54.617799] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:44.334 [2024-07-25 13:23:54.618208] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 
12288 offset_end: 18432 00:23:44.334 [2024-07-25 13:23:54.748657] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:44.334 13:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:44.334 "name": "raid_bdev1", 00:23:44.334 "uuid": "1e8c9f36-e55a-43ba-aaa3-a89dfd075056", 00:23:44.334 "strip_size_kb": 0, 00:23:44.334 "state": "online", 00:23:44.334 "raid_level": "raid1", 00:23:44.334 "superblock": false, 00:23:44.334 "num_base_bdevs": 2, 00:23:44.334 "num_base_bdevs_discovered": 2, 00:23:44.334 "num_base_bdevs_operational": 2, 00:23:44.334 "process": { 00:23:44.334 "type": "rebuild", 00:23:44.334 "target": "spare", 00:23:44.334 "progress": { 00:23:44.334 "blocks": 14336, 00:23:44.334 "percent": 21 00:23:44.334 } 00:23:44.334 }, 00:23:44.334 "base_bdevs_list": [ 00:23:44.334 { 00:23:44.334 "name": "spare", 00:23:44.334 "uuid": "e13a1c1f-e0ff-56a6-b7dd-4bf03f7d7749", 00:23:44.334 "is_configured": true, 00:23:44.334 "data_offset": 0, 00:23:44.334 "data_size": 65536 00:23:44.334 }, 00:23:44.334 { 00:23:44.334 "name": "BaseBdev2", 00:23:44.334 "uuid": "f3643395-93cf-590e-b597-86ed2eb92e58", 00:23:44.334 "is_configured": true, 00:23:44.334 "data_offset": 0, 00:23:44.334 "data_size": 65536 00:23:44.334 } 00:23:44.334 ] 00:23:44.334 }' 00:23:44.334 13:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:44.334 13:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.334 13:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:44.593 13:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.593 13:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev spare 00:23:44.593 [2024-07-25 13:23:55.069666] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:44.852 [2024-07-25 13:23:55.113745] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:44.852 [2024-07-25 13:23:55.115230] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.852 [2024-07-25 13:23:55.115254] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:44.852 [2024-07-25 13:23:55.115263] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:44.852 [2024-07-25 13:23:55.137845] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x139b4a0 00:23:44.852 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:44.852 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:44.852 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:44.852 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:44.852 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:44.852 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:44.852 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:44.852 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:44.852 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:44.852 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:44.852 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.852 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.111 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:45.111 "name": "raid_bdev1", 00:23:45.111 "uuid": "1e8c9f36-e55a-43ba-aaa3-a89dfd075056", 00:23:45.111 "strip_size_kb": 0, 00:23:45.111 "state": "online", 00:23:45.111 "raid_level": "raid1", 00:23:45.111 "superblock": false, 00:23:45.111 "num_base_bdevs": 2, 00:23:45.111 "num_base_bdevs_discovered": 1, 00:23:45.111 "num_base_bdevs_operational": 1, 00:23:45.111 "base_bdevs_list": [ 00:23:45.111 { 00:23:45.111 "name": null, 00:23:45.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.111 "is_configured": false, 00:23:45.111 "data_offset": 0, 00:23:45.111 "data_size": 65536 00:23:45.111 }, 00:23:45.111 { 00:23:45.111 "name": "BaseBdev2", 00:23:45.111 "uuid": "f3643395-93cf-590e-b597-86ed2eb92e58", 00:23:45.111 "is_configured": true, 00:23:45.111 "data_offset": 0, 00:23:45.111 "data_size": 65536 00:23:45.111 } 00:23:45.111 ] 00:23:45.111 }' 00:23:45.111 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:45.111 13:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:45.679 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:45.679 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:45.679 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:45.679 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:45.679 13:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:45.679 13:23:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.679 13:23:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.939 13:23:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:45.939 "name": "raid_bdev1", 00:23:45.939 "uuid": "1e8c9f36-e55a-43ba-aaa3-a89dfd075056", 00:23:45.939 "strip_size_kb": 0, 00:23:45.939 "state": "online", 00:23:45.939 "raid_level": "raid1", 00:23:45.939 "superblock": false, 00:23:45.939 "num_base_bdevs": 2, 00:23:45.939 "num_base_bdevs_discovered": 1, 00:23:45.939 "num_base_bdevs_operational": 1, 00:23:45.939 "base_bdevs_list": [ 00:23:45.939 { 00:23:45.939 "name": null, 00:23:45.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.939 "is_configured": false, 00:23:45.939 "data_offset": 0, 00:23:45.939 "data_size": 65536 00:23:45.939 }, 00:23:45.939 { 00:23:45.939 "name": "BaseBdev2", 00:23:45.939 "uuid": "f3643395-93cf-590e-b597-86ed2eb92e58", 00:23:45.939 "is_configured": true, 00:23:45.939 "data_offset": 0, 00:23:45.939 "data_size": 65536 00:23:45.939 } 00:23:45.939 ] 00:23:45.939 }' 00:23:45.939 13:23:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:45.939 13:23:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:45.939 13:23:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:45.939 13:23:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:45.939 13:23:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:46.198 [2024-07-25 13:23:56.554155] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:46.198 13:23:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:23:46.198 [2024-07-25 13:23:56.613865] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1539a20 00:23:46.198 [2024-07-25 13:23:56.615226] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:46.457 [2024-07-25 13:23:56.724524] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:46.457 [2024-07-25 13:23:56.724777] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:46.457 [2024-07-25 13:23:56.843739] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:46.457 [2024-07-25 13:23:56.843857] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:47.026 [2024-07-25 13:23:57.295010] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:47.026 [2024-07-25 13:23:57.295234] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:47.285 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:47.285 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:47.285 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:47.285 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:47.285 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:47.285 13:23:57 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.285 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.285 [2024-07-25 13:23:57.641382] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:47.285 [2024-07-25 13:23:57.767348] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:47.544 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:47.544 "name": "raid_bdev1", 00:23:47.544 "uuid": "1e8c9f36-e55a-43ba-aaa3-a89dfd075056", 00:23:47.544 "strip_size_kb": 0, 00:23:47.544 "state": "online", 00:23:47.544 "raid_level": "raid1", 00:23:47.544 "superblock": false, 00:23:47.544 "num_base_bdevs": 2, 00:23:47.544 "num_base_bdevs_discovered": 2, 00:23:47.544 "num_base_bdevs_operational": 2, 00:23:47.544 "process": { 00:23:47.544 "type": "rebuild", 00:23:47.544 "target": "spare", 00:23:47.544 "progress": { 00:23:47.544 "blocks": 16384, 00:23:47.544 "percent": 25 00:23:47.544 } 00:23:47.544 }, 00:23:47.544 "base_bdevs_list": [ 00:23:47.544 { 00:23:47.544 "name": "spare", 00:23:47.544 "uuid": "e13a1c1f-e0ff-56a6-b7dd-4bf03f7d7749", 00:23:47.544 "is_configured": true, 00:23:47.544 "data_offset": 0, 00:23:47.544 "data_size": 65536 00:23:47.544 }, 00:23:47.544 { 00:23:47.544 "name": "BaseBdev2", 00:23:47.544 "uuid": "f3643395-93cf-590e-b597-86ed2eb92e58", 00:23:47.545 "is_configured": true, 00:23:47.545 "data_offset": 0, 00:23:47.545 "data_size": 65536 00:23:47.545 } 00:23:47.545 ] 00:23:47.545 }' 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # 
[[ rebuild == \r\e\b\u\i\l\d ]] 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=792 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.545 13:23:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.804 [2024-07-25 13:23:58.112107] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:47.804 13:23:58 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:47.804 "name": "raid_bdev1", 00:23:47.804 "uuid": "1e8c9f36-e55a-43ba-aaa3-a89dfd075056", 00:23:47.804 "strip_size_kb": 0, 00:23:47.804 "state": "online", 00:23:47.804 "raid_level": "raid1", 00:23:47.804 "superblock": false, 00:23:47.804 "num_base_bdevs": 2, 00:23:47.804 "num_base_bdevs_discovered": 2, 00:23:47.804 "num_base_bdevs_operational": 2, 00:23:47.804 "process": { 00:23:47.804 "type": "rebuild", 00:23:47.804 "target": "spare", 00:23:47.804 "progress": { 00:23:47.804 "blocks": 20480, 00:23:47.804 "percent": 31 00:23:47.804 } 00:23:47.804 }, 00:23:47.804 "base_bdevs_list": [ 00:23:47.804 { 00:23:47.804 "name": "spare", 00:23:47.804 "uuid": "e13a1c1f-e0ff-56a6-b7dd-4bf03f7d7749", 00:23:47.804 "is_configured": true, 00:23:47.804 "data_offset": 0, 00:23:47.804 "data_size": 65536 00:23:47.804 }, 00:23:47.804 { 00:23:47.804 "name": "BaseBdev2", 00:23:47.804 "uuid": "f3643395-93cf-590e-b597-86ed2eb92e58", 00:23:47.804 "is_configured": true, 00:23:47.804 "data_offset": 0, 00:23:47.804 "data_size": 65536 00:23:47.804 } 00:23:47.804 ] 00:23:47.804 }' 00:23:47.804 13:23:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:47.804 13:23:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:47.804 13:23:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:47.804 13:23:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:47.804 13:23:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:48.063 [2024-07-25 13:23:58.338350] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:48.632 [2024-07-25 13:23:59.011646] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 
00:23:48.632 [2024-07-25 13:23:59.011956] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:48.891 13:23:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:48.891 13:23:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:48.891 13:23:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:48.891 13:23:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:48.891 13:23:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:48.891 13:23:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:48.891 13:23:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.891 13:23:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.150 13:23:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:49.150 "name": "raid_bdev1", 00:23:49.150 "uuid": "1e8c9f36-e55a-43ba-aaa3-a89dfd075056", 00:23:49.150 "strip_size_kb": 0, 00:23:49.150 "state": "online", 00:23:49.150 "raid_level": "raid1", 00:23:49.150 "superblock": false, 00:23:49.150 "num_base_bdevs": 2, 00:23:49.150 "num_base_bdevs_discovered": 2, 00:23:49.150 "num_base_bdevs_operational": 2, 00:23:49.150 "process": { 00:23:49.150 "type": "rebuild", 00:23:49.150 "target": "spare", 00:23:49.150 "progress": { 00:23:49.150 "blocks": 38912, 00:23:49.150 "percent": 59 00:23:49.150 } 00:23:49.150 }, 00:23:49.150 "base_bdevs_list": [ 00:23:49.150 { 00:23:49.150 "name": "spare", 00:23:49.150 "uuid": "e13a1c1f-e0ff-56a6-b7dd-4bf03f7d7749", 00:23:49.150 "is_configured": true, 
00:23:49.150 "data_offset": 0, 00:23:49.150 "data_size": 65536 00:23:49.150 }, 00:23:49.150 { 00:23:49.150 "name": "BaseBdev2", 00:23:49.150 "uuid": "f3643395-93cf-590e-b597-86ed2eb92e58", 00:23:49.150 "is_configured": true, 00:23:49.150 "data_offset": 0, 00:23:49.150 "data_size": 65536 00:23:49.150 } 00:23:49.150 ] 00:23:49.150 }' 00:23:49.150 13:23:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:49.150 13:23:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:49.150 13:23:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:49.150 13:23:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:49.150 13:23:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:50.526 [2024-07-25 13:24:00.576770] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:23:50.526 13:24:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:50.526 13:24:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:50.526 13:24:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:50.527 13:24:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:50.527 13:24:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:50.527 13:24:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:50.527 13:24:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.527 13:24:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:23:50.527 13:24:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:50.527 "name": "raid_bdev1", 00:23:50.527 "uuid": "1e8c9f36-e55a-43ba-aaa3-a89dfd075056", 00:23:50.527 "strip_size_kb": 0, 00:23:50.527 "state": "online", 00:23:50.527 "raid_level": "raid1", 00:23:50.527 "superblock": false, 00:23:50.527 "num_base_bdevs": 2, 00:23:50.527 "num_base_bdevs_discovered": 2, 00:23:50.527 "num_base_bdevs_operational": 2, 00:23:50.527 "process": { 00:23:50.527 "type": "rebuild", 00:23:50.527 "target": "spare", 00:23:50.527 "progress": { 00:23:50.527 "blocks": 61440, 00:23:50.527 "percent": 93 00:23:50.527 } 00:23:50.527 }, 00:23:50.527 "base_bdevs_list": [ 00:23:50.527 { 00:23:50.527 "name": "spare", 00:23:50.527 "uuid": "e13a1c1f-e0ff-56a6-b7dd-4bf03f7d7749", 00:23:50.527 "is_configured": true, 00:23:50.527 "data_offset": 0, 00:23:50.527 "data_size": 65536 00:23:50.527 }, 00:23:50.527 { 00:23:50.527 "name": "BaseBdev2", 00:23:50.527 "uuid": "f3643395-93cf-590e-b597-86ed2eb92e58", 00:23:50.527 "is_configured": true, 00:23:50.527 "data_offset": 0, 00:23:50.527 "data_size": 65536 00:23:50.527 } 00:23:50.527 ] 00:23:50.527 }' 00:23:50.527 13:24:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:50.527 13:24:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:50.527 13:24:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:50.527 13:24:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:50.527 13:24:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:50.786 [2024-07-25 13:24:01.021166] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:50.786 [2024-07-25 13:24:01.128782] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished 
rebuild on raid bdev raid_bdev1 00:23:50.786 [2024-07-25 13:24:01.130290] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.725 13:24:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:51.725 13:24:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:51.725 13:24:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:51.725 13:24:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:51.725 13:24:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:51.725 13:24:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:51.725 13:24:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.725 13:24:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.725 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:51.725 "name": "raid_bdev1", 00:23:51.725 "uuid": "1e8c9f36-e55a-43ba-aaa3-a89dfd075056", 00:23:51.725 "strip_size_kb": 0, 00:23:51.725 "state": "online", 00:23:51.725 "raid_level": "raid1", 00:23:51.725 "superblock": false, 00:23:51.725 "num_base_bdevs": 2, 00:23:51.725 "num_base_bdevs_discovered": 2, 00:23:51.725 "num_base_bdevs_operational": 2, 00:23:51.725 "base_bdevs_list": [ 00:23:51.725 { 00:23:51.725 "name": "spare", 00:23:51.725 "uuid": "e13a1c1f-e0ff-56a6-b7dd-4bf03f7d7749", 00:23:51.725 "is_configured": true, 00:23:51.725 "data_offset": 0, 00:23:51.725 "data_size": 65536 00:23:51.725 }, 00:23:51.725 { 00:23:51.725 "name": "BaseBdev2", 00:23:51.725 "uuid": "f3643395-93cf-590e-b597-86ed2eb92e58", 00:23:51.725 "is_configured": true, 
00:23:51.725 "data_offset": 0, 00:23:51.725 "data_size": 65536 00:23:51.725 } 00:23:51.725 ] 00:23:51.725 }' 00:23:51.725 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:51.725 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:51.725 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:51.983 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:51.983 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:23:51.983 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:51.983 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:51.983 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:51.983 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:51.983 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:51.983 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.983 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.242 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:52.242 "name": "raid_bdev1", 00:23:52.243 "uuid": "1e8c9f36-e55a-43ba-aaa3-a89dfd075056", 00:23:52.243 "strip_size_kb": 0, 00:23:52.243 "state": "online", 00:23:52.243 "raid_level": "raid1", 00:23:52.243 "superblock": false, 00:23:52.243 "num_base_bdevs": 2, 00:23:52.243 "num_base_bdevs_discovered": 2, 00:23:52.243 "num_base_bdevs_operational": 2, 00:23:52.243 
"base_bdevs_list": [ 00:23:52.243 { 00:23:52.243 "name": "spare", 00:23:52.243 "uuid": "e13a1c1f-e0ff-56a6-b7dd-4bf03f7d7749", 00:23:52.243 "is_configured": true, 00:23:52.243 "data_offset": 0, 00:23:52.243 "data_size": 65536 00:23:52.243 }, 00:23:52.243 { 00:23:52.243 "name": "BaseBdev2", 00:23:52.243 "uuid": "f3643395-93cf-590e-b597-86ed2eb92e58", 00:23:52.243 "is_configured": true, 00:23:52.243 "data_offset": 0, 00:23:52.243 "data_size": 65536 00:23:52.243 } 00:23:52.243 ] 00:23:52.243 }' 00:23:52.243 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:52.243 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:52.243 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:52.243 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:52.243 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:52.243 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:52.243 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:52.243 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:52.243 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:52.243 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:52.243 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:52.243 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:52.243 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:52.243 13:24:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:52.243 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.243 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.502 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:52.502 "name": "raid_bdev1", 00:23:52.502 "uuid": "1e8c9f36-e55a-43ba-aaa3-a89dfd075056", 00:23:52.502 "strip_size_kb": 0, 00:23:52.502 "state": "online", 00:23:52.502 "raid_level": "raid1", 00:23:52.502 "superblock": false, 00:23:52.502 "num_base_bdevs": 2, 00:23:52.502 "num_base_bdevs_discovered": 2, 00:23:52.502 "num_base_bdevs_operational": 2, 00:23:52.502 "base_bdevs_list": [ 00:23:52.502 { 00:23:52.502 "name": "spare", 00:23:52.502 "uuid": "e13a1c1f-e0ff-56a6-b7dd-4bf03f7d7749", 00:23:52.502 "is_configured": true, 00:23:52.502 "data_offset": 0, 00:23:52.502 "data_size": 65536 00:23:52.502 }, 00:23:52.502 { 00:23:52.502 "name": "BaseBdev2", 00:23:52.502 "uuid": "f3643395-93cf-590e-b597-86ed2eb92e58", 00:23:52.502 "is_configured": true, 00:23:52.502 "data_offset": 0, 00:23:52.502 "data_size": 65536 00:23:52.502 } 00:23:52.502 ] 00:23:52.502 }' 00:23:52.502 13:24:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:52.502 13:24:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:53.069 13:24:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:53.328 [2024-07-25 13:24:03.592435] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:53.328 [2024-07-25 13:24:03.592463] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:23:53.328 00:23:53.328 Latency(us) 00:23:53.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.328 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:53.328 raid_bdev1 : 11.35 93.60 280.80 0.00 0.00 14550.70 271.97 115762.79 00:23:53.328 =================================================================================================================== 00:23:53.328 Total : 93.60 280.80 0.00 0.00 14550.70 271.97 115762.79 00:23:53.328 [2024-07-25 13:24:03.644218] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:53.328 [2024-07-25 13:24:03.644244] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:53.328 [2024-07-25 13:24:03.644308] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:53.328 [2024-07-25 13:24:03.644318] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x139c290 name raid_bdev1, state offline 00:23:53.328 0 00:23:53.328 13:24:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.328 13:24:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:23:53.586 13:24:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:23:53.586 13:24:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:23:53.586 13:24:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:23:53.586 13:24:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:53.586 13:24:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:53.586 13:24:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 
-- # bdev_list=('spare') 00:23:53.586 13:24:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:53.586 13:24:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:53.586 13:24:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:53.586 13:24:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:23:53.586 13:24:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:53.586 13:24:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:53.586 13:24:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:53.845 /dev/nbd0 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:53.845 1+0 records in 00:23:53.845 1+0 records out 00:23:53.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217072 s, 18.9 MB/s 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:53.845 13:24:04 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:53.845 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:23:54.104 /dev/nbd1 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:54.104 1+0 records in 00:23:54.104 1+0 records out 00:23:54.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279287 s, 14.7 MB/s 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:54.104 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:54.363 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:54.363 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:54.363 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:23:54.363 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:54.363 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:54.363 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:54.363 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:23:54.363 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:54.363 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:54.363 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:54.363 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:54.363 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:54.363 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:23:54.363 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:54.363 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:54.622 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:54.622 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:54.622 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:54.622 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:54.622 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:54.622 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:54.622 13:24:04 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:23:54.622 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:54.622 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:23:54.622 13:24:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 965214 00:23:54.622 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 965214 ']' 00:23:54.622 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 965214 00:23:54.622 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:23:54.622 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:54.622 13:24:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 965214 00:23:54.622 13:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:54.622 13:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:54.622 13:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 965214' 00:23:54.622 killing process with pid 965214 00:23:54.622 13:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 965214 00:23:54.622 Received shutdown signal, test time was about 12.720947 seconds 00:23:54.622 00:23:54.622 Latency(us) 00:23:54.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.622 =================================================================================================================== 00:23:54.622 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.622 [2024-07-25 13:24:05.018742] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:54.622 13:24:05 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@974 -- # wait 965214 00:23:54.622 [2024-07-25 13:24:05.037166] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:54.881 13:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:23:54.881 00:23:54.881 real 0m17.130s 00:23:54.881 user 0m25.919s 00:23:54.881 sys 0m2.746s 00:23:54.881 13:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:54.881 13:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:54.881 ************************************ 00:23:54.881 END TEST raid_rebuild_test_io 00:23:54.881 ************************************ 00:23:54.881 13:24:05 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:23:54.881 13:24:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:23:54.881 13:24:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:54.881 13:24:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:54.881 ************************************ 00:23:54.881 START TEST raid_rebuild_test_sb_io 00:23:54.881 ************************************ 00:23:54.881 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:23:54.881 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:23:54.881 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 
00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # 
raid_pid=968729 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 968729 /var/tmp/spdk-raid.sock 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 968729 ']' 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:54.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.882 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:55.141 [2024-07-25 13:24:05.375968] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:23:55.141 [2024-07-25 13:24:05.376027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid968729 ] 00:23:55.141 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:55.141 Zero copy mechanism will not be used. 
00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:55.141 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:55.141 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:55.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:55.141 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:55.141 [2024-07-25 13:24:05.510319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.142 [2024-07-25 13:24:05.597694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.409 [2024-07-25 13:24:05.657096] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:55.409 [2024-07-25 13:24:05.657131] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:55.983 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.983 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:23:55.983 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:55.983 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:56.242 BaseBdev1_malloc 00:23:56.242 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:56.242 [2024-07-25 13:24:06.697651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:56.243 [2024-07-25 13:24:06.697695] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.243 [2024-07-25 13:24:06.697714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22985f0 00:23:56.243 [2024-07-25 13:24:06.697726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.243 [2024-07-25 13:24:06.699235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.243 [2024-07-25 13:24:06.699263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:56.243 BaseBdev1 00:23:56.243 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:56.243 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:56.501 BaseBdev2_malloc 00:23:56.501 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:56.760 [2024-07-25 13:24:07.155377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:56.760 [2024-07-25 13:24:07.155416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.760 [2024-07-25 13:24:07.155433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x243bfd0 00:23:56.760 [2024-07-25 13:24:07.155444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.760 [2024-07-25 13:24:07.156861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.760 [2024-07-25 13:24:07.156887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:56.760 BaseBdev2 00:23:56.760 13:24:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:57.019 spare_malloc 00:23:57.019 13:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:57.278 spare_delay 00:23:57.278 13:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:57.537 [2024-07-25 13:24:07.825341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:57.537 [2024-07-25 13:24:07.825382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.537 [2024-07-25 13:24:07.825399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2430340 00:23:57.537 [2024-07-25 13:24:07.825411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.537 [2024-07-25 13:24:07.826803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.537 [2024-07-25 13:24:07.826829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:57.537 spare 00:23:57.537 13:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:57.796 [2024-07-25 13:24:08.049955] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:57.796 [2024-07-25 13:24:08.051114] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:57.796 [2024-07-25 13:24:08.051257] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2290290 00:23:57.796 [2024-07-25 13:24:08.051269] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:57.796 [2024-07-25 13:24:08.051450] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2292de0 00:23:57.796 [2024-07-25 13:24:08.051576] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2290290 00:23:57.796 [2024-07-25 13:24:08.051586] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2290290 00:23:57.796 [2024-07-25 13:24:08.051683] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:57.796 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:57.796 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:57.796 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:57.796 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:57.796 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:57.796 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:57.796 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:57.796 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:57.796 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:57.796 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:57.796 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.796 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.055 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:58.055 "name": "raid_bdev1", 00:23:58.055 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:23:58.055 "strip_size_kb": 0, 00:23:58.055 "state": "online", 00:23:58.055 "raid_level": "raid1", 00:23:58.055 "superblock": true, 00:23:58.055 "num_base_bdevs": 2, 00:23:58.055 "num_base_bdevs_discovered": 2, 00:23:58.055 "num_base_bdevs_operational": 2, 00:23:58.055 "base_bdevs_list": [ 00:23:58.055 { 00:23:58.055 "name": "BaseBdev1", 00:23:58.055 "uuid": "fa52c25f-a610-58cc-bae9-79377f865636", 00:23:58.055 "is_configured": true, 00:23:58.055 "data_offset": 2048, 00:23:58.055 "data_size": 63488 00:23:58.055 }, 00:23:58.055 { 00:23:58.055 "name": "BaseBdev2", 00:23:58.055 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:23:58.055 "is_configured": true, 00:23:58.055 "data_offset": 2048, 00:23:58.055 "data_size": 63488 00:23:58.055 } 00:23:58.055 ] 00:23:58.055 }' 00:23:58.055 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:58.055 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:58.622 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:58.622 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:23:58.622 [2024-07-25 13:24:09.064824] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:58.622 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:23:58.622 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.622 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:58.882 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:23:58.882 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:23:58.882 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:58.882 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:59.141 [2024-07-25 13:24:09.443582] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x228f640 00:23:59.141 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:59.141 Zero copy mechanism will not be used. 00:23:59.141 Running I/O for 60 seconds... 
00:23:59.141 [2024-07-25 13:24:09.539831] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:59.141 [2024-07-25 13:24:09.540017] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x228f640 00:23:59.141 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:59.141 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:59.141 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:59.141 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:59.141 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:59.141 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:59.141 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:59.141 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:59.141 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:59.141 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:59.141 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.141 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.400 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:59.400 "name": "raid_bdev1", 00:23:59.401 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:23:59.401 "strip_size_kb": 0, 00:23:59.401 "state": "online", 00:23:59.401 "raid_level": 
"raid1", 00:23:59.401 "superblock": true, 00:23:59.401 "num_base_bdevs": 2, 00:23:59.401 "num_base_bdevs_discovered": 1, 00:23:59.401 "num_base_bdevs_operational": 1, 00:23:59.401 "base_bdevs_list": [ 00:23:59.401 { 00:23:59.401 "name": null, 00:23:59.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.401 "is_configured": false, 00:23:59.401 "data_offset": 2048, 00:23:59.401 "data_size": 63488 00:23:59.401 }, 00:23:59.401 { 00:23:59.401 "name": "BaseBdev2", 00:23:59.401 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:23:59.401 "is_configured": true, 00:23:59.401 "data_offset": 2048, 00:23:59.401 "data_size": 63488 00:23:59.401 } 00:23:59.401 ] 00:23:59.401 }' 00:23:59.401 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:59.401 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:59.970 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:00.263 [2024-07-25 13:24:10.572387] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:00.263 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:00.263 [2024-07-25 13:24:10.648386] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2293bb0 00:24:00.263 [2024-07-25 13:24:10.650541] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:00.522 [2024-07-25 13:24:10.759112] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:00.522 [2024-07-25 13:24:10.759399] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:00.522 [2024-07-25 13:24:10.968686] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:00.522 [2024-07-25 13:24:10.968814] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:01.089 [2024-07-25 13:24:11.311632] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:01.348 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:01.348 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:01.348 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:01.348 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:01.348 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:01.348 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.348 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.348 [2024-07-25 13:24:11.757269] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:01.348 [2024-07-25 13:24:11.757641] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:01.607 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:01.607 "name": "raid_bdev1", 00:24:01.607 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:01.607 "strip_size_kb": 0, 00:24:01.607 "state": "online", 00:24:01.607 "raid_level": "raid1", 00:24:01.607 "superblock": true, 00:24:01.607 "num_base_bdevs": 2, 00:24:01.607 
"num_base_bdevs_discovered": 2, 00:24:01.607 "num_base_bdevs_operational": 2, 00:24:01.607 "process": { 00:24:01.607 "type": "rebuild", 00:24:01.607 "target": "spare", 00:24:01.607 "progress": { 00:24:01.607 "blocks": 14336, 00:24:01.607 "percent": 22 00:24:01.607 } 00:24:01.607 }, 00:24:01.607 "base_bdevs_list": [ 00:24:01.607 { 00:24:01.607 "name": "spare", 00:24:01.607 "uuid": "6cda2429-c440-5f3b-8d9e-9c3a64b17977", 00:24:01.607 "is_configured": true, 00:24:01.607 "data_offset": 2048, 00:24:01.607 "data_size": 63488 00:24:01.607 }, 00:24:01.607 { 00:24:01.607 "name": "BaseBdev2", 00:24:01.607 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:01.607 "is_configured": true, 00:24:01.607 "data_offset": 2048, 00:24:01.607 "data_size": 63488 00:24:01.607 } 00:24:01.607 ] 00:24:01.607 }' 00:24:01.607 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:01.607 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:01.607 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:01.607 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:01.607 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:01.607 [2024-07-25 13:24:11.966859] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:01.607 [2024-07-25 13:24:11.967065] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:01.866 [2024-07-25 13:24:12.161218] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:01.866 [2024-07-25 13:24:12.217356] bdev_raid.c:2557:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:01.866 [2024-07-25 13:24:12.226417] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:01.866 [2024-07-25 13:24:12.226442] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:01.866 [2024-07-25 13:24:12.226451] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:01.866 [2024-07-25 13:24:12.261514] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x228f640 00:24:01.866 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:01.866 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:01.866 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:01.866 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:01.866 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:01.866 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:01.866 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:01.866 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:01.866 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:01.866 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:01.866 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.866 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:24:02.126 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:02.126 "name": "raid_bdev1", 00:24:02.126 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:02.126 "strip_size_kb": 0, 00:24:02.126 "state": "online", 00:24:02.126 "raid_level": "raid1", 00:24:02.126 "superblock": true, 00:24:02.126 "num_base_bdevs": 2, 00:24:02.126 "num_base_bdevs_discovered": 1, 00:24:02.126 "num_base_bdevs_operational": 1, 00:24:02.126 "base_bdevs_list": [ 00:24:02.126 { 00:24:02.126 "name": null, 00:24:02.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.126 "is_configured": false, 00:24:02.126 "data_offset": 2048, 00:24:02.126 "data_size": 63488 00:24:02.126 }, 00:24:02.126 { 00:24:02.126 "name": "BaseBdev2", 00:24:02.126 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:02.126 "is_configured": true, 00:24:02.126 "data_offset": 2048, 00:24:02.126 "data_size": 63488 00:24:02.126 } 00:24:02.126 ] 00:24:02.126 }' 00:24:02.126 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:02.126 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:02.693 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:02.693 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:02.693 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:02.693 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:02.693 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:02.693 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.693 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.951 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:02.951 "name": "raid_bdev1", 00:24:02.951 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:02.951 "strip_size_kb": 0, 00:24:02.951 "state": "online", 00:24:02.951 "raid_level": "raid1", 00:24:02.951 "superblock": true, 00:24:02.951 "num_base_bdevs": 2, 00:24:02.951 "num_base_bdevs_discovered": 1, 00:24:02.951 "num_base_bdevs_operational": 1, 00:24:02.951 "base_bdevs_list": [ 00:24:02.951 { 00:24:02.951 "name": null, 00:24:02.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.951 "is_configured": false, 00:24:02.951 "data_offset": 2048, 00:24:02.951 "data_size": 63488 00:24:02.951 }, 00:24:02.951 { 00:24:02.951 "name": "BaseBdev2", 00:24:02.951 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:02.951 "is_configured": true, 00:24:02.951 "data_offset": 2048, 00:24:02.951 "data_size": 63488 00:24:02.951 } 00:24:02.951 ] 00:24:02.951 }' 00:24:02.951 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:02.951 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:02.951 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:03.208 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:03.208 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:03.208 [2024-07-25 13:24:13.652926] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:03.464 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:24:03.464 [2024-07-25 
13:24:13.714355] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2296f60 00:24:03.464 [2024-07-25 13:24:13.715706] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:03.464 [2024-07-25 13:24:13.831934] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:03.464 [2024-07-25 13:24:13.832176] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:03.721 [2024-07-25 13:24:14.049975] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:03.721 [2024-07-25 13:24:14.050136] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:03.979 [2024-07-25 13:24:14.385325] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:04.237 [2024-07-25 13:24:14.602306] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:04.237 [2024-07-25 13:24:14.602430] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:04.237 13:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.237 13:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:04.237 13:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:04.237 13:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:04.237 13:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:04.237 13:24:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.237 13:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.495 13:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:04.495 "name": "raid_bdev1", 00:24:04.495 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:04.495 "strip_size_kb": 0, 00:24:04.495 "state": "online", 00:24:04.495 "raid_level": "raid1", 00:24:04.495 "superblock": true, 00:24:04.495 "num_base_bdevs": 2, 00:24:04.495 "num_base_bdevs_discovered": 2, 00:24:04.495 "num_base_bdevs_operational": 2, 00:24:04.495 "process": { 00:24:04.495 "type": "rebuild", 00:24:04.495 "target": "spare", 00:24:04.495 "progress": { 00:24:04.495 "blocks": 14336, 00:24:04.495 "percent": 22 00:24:04.495 } 00:24:04.495 }, 00:24:04.495 "base_bdevs_list": [ 00:24:04.495 { 00:24:04.495 "name": "spare", 00:24:04.495 "uuid": "6cda2429-c440-5f3b-8d9e-9c3a64b17977", 00:24:04.495 "is_configured": true, 00:24:04.495 "data_offset": 2048, 00:24:04.495 "data_size": 63488 00:24:04.495 }, 00:24:04.495 { 00:24:04.495 "name": "BaseBdev2", 00:24:04.495 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:04.495 "is_configured": true, 00:24:04.495 "data_offset": 2048, 00:24:04.495 "data_size": 63488 00:24:04.495 } 00:24:04.495 ] 00:24:04.495 }' 00:24:04.495 13:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:04.754 13:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:04.754 13:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:04.754 [2024-07-25 13:24:14.986613] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:04.754 13:24:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.754 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:24:04.754 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:24:04.754 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:24:04.754 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:24:04.754 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:24:04.754 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:24:04.754 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=810 00:24:04.754 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:04.754 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.754 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:04.754 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:04.754 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:04.754 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:04.754 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.754 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.029 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:05.029 
"name": "raid_bdev1", 00:24:05.029 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:05.029 "strip_size_kb": 0, 00:24:05.029 "state": "online", 00:24:05.029 "raid_level": "raid1", 00:24:05.029 "superblock": true, 00:24:05.029 "num_base_bdevs": 2, 00:24:05.029 "num_base_bdevs_discovered": 2, 00:24:05.029 "num_base_bdevs_operational": 2, 00:24:05.029 "process": { 00:24:05.029 "type": "rebuild", 00:24:05.029 "target": "spare", 00:24:05.029 "progress": { 00:24:05.029 "blocks": 18432, 00:24:05.029 "percent": 29 00:24:05.029 } 00:24:05.029 }, 00:24:05.029 "base_bdevs_list": [ 00:24:05.029 { 00:24:05.029 "name": "spare", 00:24:05.029 "uuid": "6cda2429-c440-5f3b-8d9e-9c3a64b17977", 00:24:05.029 "is_configured": true, 00:24:05.029 "data_offset": 2048, 00:24:05.029 "data_size": 63488 00:24:05.029 }, 00:24:05.029 { 00:24:05.029 "name": "BaseBdev2", 00:24:05.029 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:05.029 "is_configured": true, 00:24:05.029 "data_offset": 2048, 00:24:05.029 "data_size": 63488 00:24:05.029 } 00:24:05.029 ] 00:24:05.029 }' 00:24:05.029 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:05.029 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.029 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:05.029 [2024-07-25 13:24:15.338260] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:05.029 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.029 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:05.029 [2024-07-25 13:24:15.456381] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:05.029 [2024-07-25 13:24:15.456519] 
bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:05.597 [2024-07-25 13:24:15.776309] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:05.597 [2024-07-25 13:24:15.909103] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:05.597 [2024-07-25 13:24:15.909229] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:05.856 [2024-07-25 13:24:16.146472] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:06.115 13:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:06.115 13:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:06.115 13:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:06.115 13:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:06.115 13:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:06.115 13:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:06.115 13:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.115 13:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.115 [2024-07-25 13:24:16.497294] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:24:06.115 13:24:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:06.115 "name": "raid_bdev1", 00:24:06.115 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:06.115 "strip_size_kb": 0, 00:24:06.115 "state": "online", 00:24:06.115 "raid_level": "raid1", 00:24:06.115 "superblock": true, 00:24:06.115 "num_base_bdevs": 2, 00:24:06.115 "num_base_bdevs_discovered": 2, 00:24:06.115 "num_base_bdevs_operational": 2, 00:24:06.115 "process": { 00:24:06.115 "type": "rebuild", 00:24:06.115 "target": "spare", 00:24:06.115 "progress": { 00:24:06.115 "blocks": 38912, 00:24:06.115 "percent": 61 00:24:06.115 } 00:24:06.115 }, 00:24:06.115 "base_bdevs_list": [ 00:24:06.115 { 00:24:06.115 "name": "spare", 00:24:06.115 "uuid": "6cda2429-c440-5f3b-8d9e-9c3a64b17977", 00:24:06.115 "is_configured": true, 00:24:06.115 "data_offset": 2048, 00:24:06.115 "data_size": 63488 00:24:06.115 }, 00:24:06.115 { 00:24:06.115 "name": "BaseBdev2", 00:24:06.115 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:06.115 "is_configured": true, 00:24:06.115 "data_offset": 2048, 00:24:06.115 "data_size": 63488 00:24:06.115 } 00:24:06.115 ] 00:24:06.115 }' 00:24:06.115 13:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:06.374 13:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:06.374 13:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:06.374 13:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:06.374 13:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:06.374 [2024-07-25 13:24:16.707921] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:24:06.633 [2024-07-25 13:24:17.059152] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:24:06.892 [2024-07-25 13:24:17.262503] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:07.151 [2024-07-25 13:24:17.597532] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:24:07.410 13:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:07.410 13:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:07.410 13:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:07.410 13:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:07.410 13:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:07.410 13:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:07.410 13:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.410 13:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.669 13:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:07.669 "name": "raid_bdev1", 00:24:07.669 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:07.669 "strip_size_kb": 0, 00:24:07.669 "state": "online", 00:24:07.669 "raid_level": "raid1", 00:24:07.669 "superblock": true, 00:24:07.669 "num_base_bdevs": 2, 00:24:07.669 "num_base_bdevs_discovered": 2, 00:24:07.669 "num_base_bdevs_operational": 2, 00:24:07.669 "process": { 00:24:07.669 "type": "rebuild", 00:24:07.669 "target": "spare", 00:24:07.669 "progress": { 00:24:07.669 
"blocks": 55296, 00:24:07.669 "percent": 87 00:24:07.669 } 00:24:07.669 }, 00:24:07.669 "base_bdevs_list": [ 00:24:07.669 { 00:24:07.669 "name": "spare", 00:24:07.669 "uuid": "6cda2429-c440-5f3b-8d9e-9c3a64b17977", 00:24:07.669 "is_configured": true, 00:24:07.669 "data_offset": 2048, 00:24:07.669 "data_size": 63488 00:24:07.669 }, 00:24:07.669 { 00:24:07.669 "name": "BaseBdev2", 00:24:07.669 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:07.669 "is_configured": true, 00:24:07.669 "data_offset": 2048, 00:24:07.669 "data_size": 63488 00:24:07.669 } 00:24:07.669 ] 00:24:07.669 }' 00:24:07.669 13:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:07.669 13:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:07.669 [2024-07-25 13:24:17.934330] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:24:07.669 13:24:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:07.669 13:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.669 13:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:07.928 [2024-07-25 13:24:18.272191] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:07.928 [2024-07-25 13:24:18.372424] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:07.928 [2024-07-25 13:24:18.373359] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.864 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:08.864 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.864 13:24:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:08.864 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:08.864 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:08.864 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:08.864 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.864 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.123 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:09.123 "name": "raid_bdev1", 00:24:09.123 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:09.123 "strip_size_kb": 0, 00:24:09.123 "state": "online", 00:24:09.123 "raid_level": "raid1", 00:24:09.123 "superblock": true, 00:24:09.123 "num_base_bdevs": 2, 00:24:09.123 "num_base_bdevs_discovered": 2, 00:24:09.123 "num_base_bdevs_operational": 2, 00:24:09.123 "base_bdevs_list": [ 00:24:09.123 { 00:24:09.123 "name": "spare", 00:24:09.123 "uuid": "6cda2429-c440-5f3b-8d9e-9c3a64b17977", 00:24:09.123 "is_configured": true, 00:24:09.123 "data_offset": 2048, 00:24:09.123 "data_size": 63488 00:24:09.123 }, 00:24:09.123 { 00:24:09.123 "name": "BaseBdev2", 00:24:09.123 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:09.123 "is_configured": true, 00:24:09.123 "data_offset": 2048, 00:24:09.123 "data_size": 63488 00:24:09.123 } 00:24:09.123 ] 00:24:09.123 }' 00:24:09.123 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:09.123 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:09.123 13:24:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:09.382 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:09.382 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:24:09.382 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:09.382 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:09.382 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:09.382 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:09.382 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:09.382 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.382 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.382 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:09.382 "name": "raid_bdev1", 00:24:09.382 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:09.382 "strip_size_kb": 0, 00:24:09.382 "state": "online", 00:24:09.382 "raid_level": "raid1", 00:24:09.382 "superblock": true, 00:24:09.382 "num_base_bdevs": 2, 00:24:09.382 "num_base_bdevs_discovered": 2, 00:24:09.382 "num_base_bdevs_operational": 2, 00:24:09.382 "base_bdevs_list": [ 00:24:09.382 { 00:24:09.382 "name": "spare", 00:24:09.382 "uuid": "6cda2429-c440-5f3b-8d9e-9c3a64b17977", 00:24:09.382 "is_configured": true, 00:24:09.382 "data_offset": 2048, 00:24:09.382 "data_size": 63488 00:24:09.382 }, 00:24:09.382 { 00:24:09.382 "name": "BaseBdev2", 00:24:09.382 "uuid": 
"cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:09.382 "is_configured": true, 00:24:09.382 "data_offset": 2048, 00:24:09.382 "data_size": 63488 00:24:09.382 } 00:24:09.382 ] 00:24:09.382 }' 00:24:09.382 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:09.641 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:09.641 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:09.641 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:09.641 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:09.641 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:09.641 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:09.641 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:09.641 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:09.641 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:09.641 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:09.641 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:09.641 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:09.641 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:09.641 13:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.641 13:24:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.900 13:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:09.900 "name": "raid_bdev1", 00:24:09.900 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:09.900 "strip_size_kb": 0, 00:24:09.900 "state": "online", 00:24:09.900 "raid_level": "raid1", 00:24:09.900 "superblock": true, 00:24:09.900 "num_base_bdevs": 2, 00:24:09.900 "num_base_bdevs_discovered": 2, 00:24:09.900 "num_base_bdevs_operational": 2, 00:24:09.900 "base_bdevs_list": [ 00:24:09.900 { 00:24:09.900 "name": "spare", 00:24:09.900 "uuid": "6cda2429-c440-5f3b-8d9e-9c3a64b17977", 00:24:09.900 "is_configured": true, 00:24:09.900 "data_offset": 2048, 00:24:09.900 "data_size": 63488 00:24:09.900 }, 00:24:09.900 { 00:24:09.900 "name": "BaseBdev2", 00:24:09.900 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:09.900 "is_configured": true, 00:24:09.900 "data_offset": 2048, 00:24:09.900 "data_size": 63488 00:24:09.900 } 00:24:09.900 ] 00:24:09.900 }' 00:24:09.900 13:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:09.900 13:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:10.466 13:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:10.724 [2024-07-25 13:24:20.957218] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:10.724 [2024-07-25 13:24:20.957253] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:10.724 00:24:10.724 Latency(us) 00:24:10.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.724 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:10.724 raid_bdev1 : 
11.53 95.04 285.13 0.00 0.00 13748.41 270.34 116601.65 00:24:10.724 =================================================================================================================== 00:24:10.724 Total : 95.04 285.13 0.00 0.00 13748.41 270.34 116601.65 00:24:10.724 [2024-07-25 13:24:21.008987] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:10.724 [2024-07-25 13:24:21.009013] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:10.724 [2024-07-25 13:24:21.009079] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:10.724 [2024-07-25 13:24:21.009090] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2290290 name raid_bdev1, state offline 00:24:10.724 0 00:24:10.724 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.724 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:24:10.982 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:24:10.982 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:24:10.982 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:24:10.982 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:10.982 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:10.982 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:10.982 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:10.982 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:10.982 
13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:10.982 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:10.982 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:10.982 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:10.982 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:11.241 /dev/nbd0 00:24:11.241 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:11.241 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:11.241 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:11.241 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:24:11.241 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:11.241 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:11.241 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:11.241 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:24:11.241 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:11.241 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:11.241 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:11.241 1+0 records in 00:24:11.241 1+0 records out 00:24:11.241 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000282341 s, 14.5 MB/s 00:24:11.241 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:11.241 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:24:11.241 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:11.242 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:11.242 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:24:11.242 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:11.242 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:11.242 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:24:11.242 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:24:11.242 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:24:11.242 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:11.242 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:24:11.242 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:11.242 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:11.242 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:11.242 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:11.242 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
00:24:11.242 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:11.242 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:24:11.501 /dev/nbd1 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:11.501 1+0 records in 00:24:11.501 1+0 records out 00:24:11.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237999 s, 17.2 MB/s 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # size=4096 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:11.501 13:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:11.760 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:11.760 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:11.760 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:11.760 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:11.760 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:11.760 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:11.760 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:11.760 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:11.760 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:11.760 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:11.760 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:11.760 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:11.760 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:11.760 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:11.760 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:12.019 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:12.019 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:12.019 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:12.019 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:12.019 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:12.019 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:12.019 13:24:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:12.019 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:12.019 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:24:12.019 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:12.278 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:12.537 [2024-07-25 13:24:22.800032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:12.537 [2024-07-25 13:24:22.800075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.537 [2024-07-25 13:24:22.800092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2292e30 00:24:12.537 [2024-07-25 13:24:22.800104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.537 [2024-07-25 13:24:22.801872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.537 [2024-07-25 13:24:22.801899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:12.537 [2024-07-25 13:24:22.801970] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:12.537 [2024-07-25 13:24:22.801996] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:12.537 [2024-07-25 13:24:22.802087] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:12.537 spare 00:24:12.537 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:12.537 13:24:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:12.537 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:12.537 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:12.537 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:12.537 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:12.537 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:12.537 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:12.537 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:12.537 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:12.537 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.537 13:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.537 [2024-07-25 13:24:22.902403] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x22979c0 00:24:12.537 [2024-07-25 13:24:22.902417] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:12.537 [2024-07-25 13:24:22.902586] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x242e210 00:24:12.537 [2024-07-25 13:24:22.902720] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22979c0 00:24:12.537 [2024-07-25 13:24:22.902729] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22979c0 00:24:12.537 [2024-07-25 13:24:22.902829] 
bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:12.797 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:12.797 "name": "raid_bdev1", 00:24:12.797 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:12.797 "strip_size_kb": 0, 00:24:12.797 "state": "online", 00:24:12.797 "raid_level": "raid1", 00:24:12.797 "superblock": true, 00:24:12.797 "num_base_bdevs": 2, 00:24:12.797 "num_base_bdevs_discovered": 2, 00:24:12.797 "num_base_bdevs_operational": 2, 00:24:12.797 "base_bdevs_list": [ 00:24:12.797 { 00:24:12.797 "name": "spare", 00:24:12.797 "uuid": "6cda2429-c440-5f3b-8d9e-9c3a64b17977", 00:24:12.797 "is_configured": true, 00:24:12.797 "data_offset": 2048, 00:24:12.797 "data_size": 63488 00:24:12.797 }, 00:24:12.797 { 00:24:12.797 "name": "BaseBdev2", 00:24:12.797 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:12.797 "is_configured": true, 00:24:12.797 "data_offset": 2048, 00:24:12.797 "data_size": 63488 00:24:12.797 } 00:24:12.797 ] 00:24:12.797 }' 00:24:12.797 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:12.797 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:13.412 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:13.412 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:13.412 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:13.412 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:13.412 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:13.412 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:24:13.412 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.412 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:13.412 "name": "raid_bdev1", 00:24:13.412 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:13.412 "strip_size_kb": 0, 00:24:13.412 "state": "online", 00:24:13.412 "raid_level": "raid1", 00:24:13.412 "superblock": true, 00:24:13.412 "num_base_bdevs": 2, 00:24:13.412 "num_base_bdevs_discovered": 2, 00:24:13.412 "num_base_bdevs_operational": 2, 00:24:13.412 "base_bdevs_list": [ 00:24:13.412 { 00:24:13.412 "name": "spare", 00:24:13.412 "uuid": "6cda2429-c440-5f3b-8d9e-9c3a64b17977", 00:24:13.412 "is_configured": true, 00:24:13.412 "data_offset": 2048, 00:24:13.412 "data_size": 63488 00:24:13.412 }, 00:24:13.412 { 00:24:13.412 "name": "BaseBdev2", 00:24:13.412 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:13.412 "is_configured": true, 00:24:13.412 "data_offset": 2048, 00:24:13.412 "data_size": 63488 00:24:13.412 } 00:24:13.412 ] 00:24:13.412 }' 00:24:13.412 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:13.412 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:13.412 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:13.671 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:13.671 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:13.671 13:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.671 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ 
spare == \s\p\a\r\e ]] 00:24:13.671 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:13.929 [2024-07-25 13:24:24.340373] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:13.929 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:13.929 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:13.929 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:13.929 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:13.929 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:13.929 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:13.929 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:13.929 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:13.929 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:13.929 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:13.929 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.929 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.188 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:14.188 "name": "raid_bdev1", 00:24:14.188 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 
00:24:14.188 "strip_size_kb": 0, 00:24:14.188 "state": "online", 00:24:14.188 "raid_level": "raid1", 00:24:14.188 "superblock": true, 00:24:14.188 "num_base_bdevs": 2, 00:24:14.188 "num_base_bdevs_discovered": 1, 00:24:14.188 "num_base_bdevs_operational": 1, 00:24:14.188 "base_bdevs_list": [ 00:24:14.188 { 00:24:14.188 "name": null, 00:24:14.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.188 "is_configured": false, 00:24:14.188 "data_offset": 2048, 00:24:14.188 "data_size": 63488 00:24:14.188 }, 00:24:14.188 { 00:24:14.188 "name": "BaseBdev2", 00:24:14.188 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:14.188 "is_configured": true, 00:24:14.188 "data_offset": 2048, 00:24:14.188 "data_size": 63488 00:24:14.188 } 00:24:14.188 ] 00:24:14.188 }' 00:24:14.188 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:14.188 13:24:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:14.755 13:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:15.013 [2024-07-25 13:24:25.363194] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:15.013 [2024-07-25 13:24:25.363326] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:15.013 [2024-07-25 13:24:25.363341] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:15.013 [2024-07-25 13:24:25.363368] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:15.013 [2024-07-25 13:24:25.368428] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x242e210 00:24:15.013 [2024-07-25 13:24:25.370472] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:15.013 13:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:24:15.949 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:15.949 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:15.949 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:15.950 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:15.950 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:15.950 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.950 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.208 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:16.208 "name": "raid_bdev1", 00:24:16.208 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:16.208 "strip_size_kb": 0, 00:24:16.208 "state": "online", 00:24:16.208 "raid_level": "raid1", 00:24:16.208 "superblock": true, 00:24:16.208 "num_base_bdevs": 2, 00:24:16.208 "num_base_bdevs_discovered": 2, 00:24:16.208 "num_base_bdevs_operational": 2, 00:24:16.208 "process": { 00:24:16.208 "type": "rebuild", 00:24:16.208 "target": "spare", 00:24:16.208 "progress": { 00:24:16.208 "blocks": 22528, 
00:24:16.208 "percent": 35 00:24:16.208 } 00:24:16.208 }, 00:24:16.208 "base_bdevs_list": [ 00:24:16.208 { 00:24:16.208 "name": "spare", 00:24:16.209 "uuid": "6cda2429-c440-5f3b-8d9e-9c3a64b17977", 00:24:16.209 "is_configured": true, 00:24:16.209 "data_offset": 2048, 00:24:16.209 "data_size": 63488 00:24:16.209 }, 00:24:16.209 { 00:24:16.209 "name": "BaseBdev2", 00:24:16.209 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:16.209 "is_configured": true, 00:24:16.209 "data_offset": 2048, 00:24:16.209 "data_size": 63488 00:24:16.209 } 00:24:16.209 ] 00:24:16.209 }' 00:24:16.209 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:16.209 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:16.209 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:16.209 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:16.209 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:16.467 [2024-07-25 13:24:26.846555] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:16.467 [2024-07-25 13:24:26.881544] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:16.467 [2024-07-25 13:24:26.881592] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.467 [2024-07-25 13:24:26.881606] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:16.467 [2024-07-25 13:24:26.881614] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:16.467 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:24:16.467 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:16.467 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:16.467 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:16.467 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:16.467 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:16.467 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:16.467 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:16.467 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:16.467 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:16.467 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.467 13:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.726 13:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:16.726 "name": "raid_bdev1", 00:24:16.726 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:16.726 "strip_size_kb": 0, 00:24:16.726 "state": "online", 00:24:16.726 "raid_level": "raid1", 00:24:16.726 "superblock": true, 00:24:16.726 "num_base_bdevs": 2, 00:24:16.726 "num_base_bdevs_discovered": 1, 00:24:16.726 "num_base_bdevs_operational": 1, 00:24:16.726 "base_bdevs_list": [ 00:24:16.726 { 00:24:16.726 "name": null, 00:24:16.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.726 "is_configured": false, 00:24:16.726 
"data_offset": 2048, 00:24:16.726 "data_size": 63488 00:24:16.726 }, 00:24:16.726 { 00:24:16.726 "name": "BaseBdev2", 00:24:16.726 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:16.726 "is_configured": true, 00:24:16.726 "data_offset": 2048, 00:24:16.726 "data_size": 63488 00:24:16.726 } 00:24:16.726 ] 00:24:16.726 }' 00:24:16.726 13:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:16.726 13:24:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:17.293 13:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:17.552 [2024-07-25 13:24:27.928858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:17.552 [2024-07-25 13:24:27.928903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:17.552 [2024-07-25 13:24:27.928921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2293060 00:24:17.553 [2024-07-25 13:24:27.928933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:17.553 [2024-07-25 13:24:27.929271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:17.553 [2024-07-25 13:24:27.929287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:17.553 [2024-07-25 13:24:27.929379] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:17.553 [2024-07-25 13:24:27.929392] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:17.553 [2024-07-25 13:24:27.929403] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:17.553 [2024-07-25 13:24:27.929421] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:17.553 [2024-07-25 13:24:27.934474] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x244abb0 00:24:17.553 spare 00:24:17.553 [2024-07-25 13:24:27.935840] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:17.553 13:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:24:18.490 13:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:18.490 13:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:18.490 13:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:18.490 13:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:18.490 13:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:18.490 13:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.490 13:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.748 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:18.748 "name": "raid_bdev1", 00:24:18.748 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:18.748 "strip_size_kb": 0, 00:24:18.748 "state": "online", 00:24:18.748 "raid_level": "raid1", 00:24:18.748 "superblock": true, 00:24:18.748 "num_base_bdevs": 2, 00:24:18.748 "num_base_bdevs_discovered": 2, 00:24:18.748 "num_base_bdevs_operational": 2, 00:24:18.748 "process": { 00:24:18.748 "type": "rebuild", 00:24:18.748 "target": "spare", 00:24:18.748 "progress": { 00:24:18.748 
"blocks": 24576, 00:24:18.748 "percent": 38 00:24:18.748 } 00:24:18.748 }, 00:24:18.748 "base_bdevs_list": [ 00:24:18.748 { 00:24:18.748 "name": "spare", 00:24:18.748 "uuid": "6cda2429-c440-5f3b-8d9e-9c3a64b17977", 00:24:18.748 "is_configured": true, 00:24:18.748 "data_offset": 2048, 00:24:18.748 "data_size": 63488 00:24:18.748 }, 00:24:18.748 { 00:24:18.748 "name": "BaseBdev2", 00:24:18.748 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:18.748 "is_configured": true, 00:24:18.748 "data_offset": 2048, 00:24:18.748 "data_size": 63488 00:24:18.748 } 00:24:18.748 ] 00:24:18.748 }' 00:24:18.748 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:18.749 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:19.007 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:19.007 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:19.007 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:19.007 [2024-07-25 13:24:29.491425] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:19.267 [2024-07-25 13:24:29.547546] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:19.267 [2024-07-25 13:24:29.547587] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:19.267 [2024-07-25 13:24:29.547601] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:19.267 [2024-07-25 13:24:29.547608] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:19.267 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:24:19.267 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:19.267 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:19.267 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:19.267 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:19.267 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:19.267 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:19.267 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:19.267 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:19.267 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:19.267 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.267 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.526 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:19.526 "name": "raid_bdev1", 00:24:19.527 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:19.527 "strip_size_kb": 0, 00:24:19.527 "state": "online", 00:24:19.527 "raid_level": "raid1", 00:24:19.527 "superblock": true, 00:24:19.527 "num_base_bdevs": 2, 00:24:19.527 "num_base_bdevs_discovered": 1, 00:24:19.527 "num_base_bdevs_operational": 1, 00:24:19.527 "base_bdevs_list": [ 00:24:19.527 { 00:24:19.527 "name": null, 00:24:19.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.527 "is_configured": false, 00:24:19.527 
"data_offset": 2048, 00:24:19.527 "data_size": 63488 00:24:19.527 }, 00:24:19.527 { 00:24:19.527 "name": "BaseBdev2", 00:24:19.527 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:19.527 "is_configured": true, 00:24:19.527 "data_offset": 2048, 00:24:19.527 "data_size": 63488 00:24:19.527 } 00:24:19.527 ] 00:24:19.527 }' 00:24:19.527 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:19.527 13:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:20.094 13:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:20.094 13:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:20.094 13:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:20.094 13:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:20.094 13:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:20.094 13:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.094 13:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.353 13:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:20.353 "name": "raid_bdev1", 00:24:20.353 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:20.353 "strip_size_kb": 0, 00:24:20.353 "state": "online", 00:24:20.353 "raid_level": "raid1", 00:24:20.353 "superblock": true, 00:24:20.353 "num_base_bdevs": 2, 00:24:20.353 "num_base_bdevs_discovered": 1, 00:24:20.353 "num_base_bdevs_operational": 1, 00:24:20.353 "base_bdevs_list": [ 00:24:20.353 { 00:24:20.353 "name": null, 00:24:20.353 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:20.353 "is_configured": false, 00:24:20.353 "data_offset": 2048, 00:24:20.353 "data_size": 63488 00:24:20.353 }, 00:24:20.353 { 00:24:20.353 "name": "BaseBdev2", 00:24:20.353 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:20.353 "is_configured": true, 00:24:20.353 "data_offset": 2048, 00:24:20.353 "data_size": 63488 00:24:20.353 } 00:24:20.353 ] 00:24:20.353 }' 00:24:20.353 13:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:20.353 13:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:20.353 13:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:20.353 13:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:20.353 13:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:20.611 13:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:20.869 [2024-07-25 13:24:31.148542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:20.869 [2024-07-25 13:24:31.148583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.869 [2024-07-25 13:24:31.148600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22904a0 00:24:20.869 [2024-07-25 13:24:31.148611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.869 [2024-07-25 13:24:31.148913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.869 [2024-07-25 13:24:31.148928] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:20.870 [2024-07-25 13:24:31.148984] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:20.870 [2024-07-25 13:24:31.148994] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:20.870 [2024-07-25 13:24:31.149004] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:20.870 BaseBdev1 00:24:20.870 13:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:24:21.807 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:21.807 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:21.807 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:21.807 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:21.807 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:21.807 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:21.807 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:21.807 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:21.807 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:21.807 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:21.807 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.807 13:24:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.066 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:22.066 "name": "raid_bdev1", 00:24:22.066 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:22.066 "strip_size_kb": 0, 00:24:22.066 "state": "online", 00:24:22.066 "raid_level": "raid1", 00:24:22.066 "superblock": true, 00:24:22.066 "num_base_bdevs": 2, 00:24:22.066 "num_base_bdevs_discovered": 1, 00:24:22.066 "num_base_bdevs_operational": 1, 00:24:22.066 "base_bdevs_list": [ 00:24:22.066 { 00:24:22.066 "name": null, 00:24:22.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.066 "is_configured": false, 00:24:22.066 "data_offset": 2048, 00:24:22.066 "data_size": 63488 00:24:22.066 }, 00:24:22.066 { 00:24:22.066 "name": "BaseBdev2", 00:24:22.066 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:22.066 "is_configured": true, 00:24:22.066 "data_offset": 2048, 00:24:22.066 "data_size": 63488 00:24:22.066 } 00:24:22.066 ] 00:24:22.066 }' 00:24:22.066 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:22.066 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:22.633 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:22.633 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:22.633 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:22.633 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:22.633 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:22.633 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.633 13:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:22.892 "name": "raid_bdev1", 00:24:22.892 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:22.892 "strip_size_kb": 0, 00:24:22.892 "state": "online", 00:24:22.892 "raid_level": "raid1", 00:24:22.892 "superblock": true, 00:24:22.892 "num_base_bdevs": 2, 00:24:22.892 "num_base_bdevs_discovered": 1, 00:24:22.892 "num_base_bdevs_operational": 1, 00:24:22.892 "base_bdevs_list": [ 00:24:22.892 { 00:24:22.892 "name": null, 00:24:22.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.892 "is_configured": false, 00:24:22.892 "data_offset": 2048, 00:24:22.892 "data_size": 63488 00:24:22.892 }, 00:24:22.892 { 00:24:22.892 "name": "BaseBdev2", 00:24:22.892 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:22.892 "is_configured": true, 00:24:22.892 "data_offset": 2048, 00:24:22.892 "data_size": 63488 00:24:22.892 } 00:24:22.892 ] 00:24:22.892 }' 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local 
es=0 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:24:22.892 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:23.150 [2024-07-25 13:24:33.507158] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:23.150 [2024-07-25 13:24:33.507270] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:23.150 
[2024-07-25 13:24:33.507284] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:23.150 request: 00:24:23.150 { 00:24:23.150 "base_bdev": "BaseBdev1", 00:24:23.150 "raid_bdev": "raid_bdev1", 00:24:23.150 "method": "bdev_raid_add_base_bdev", 00:24:23.150 "req_id": 1 00:24:23.150 } 00:24:23.150 Got JSON-RPC error response 00:24:23.150 response: 00:24:23.150 { 00:24:23.150 "code": -22, 00:24:23.150 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:23.150 } 00:24:23.150 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:24:23.150 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:23.150 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:23.150 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:23.150 13:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:24:24.087 13:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:24.087 13:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:24.087 13:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:24.087 13:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:24.087 13:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:24.087 13:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:24.087 13:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:24.087 13:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:24.087 13:24:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:24.087 13:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:24.087 13:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.087 13:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.345 13:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:24.345 "name": "raid_bdev1", 00:24:24.345 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:24.345 "strip_size_kb": 0, 00:24:24.345 "state": "online", 00:24:24.345 "raid_level": "raid1", 00:24:24.345 "superblock": true, 00:24:24.345 "num_base_bdevs": 2, 00:24:24.345 "num_base_bdevs_discovered": 1, 00:24:24.345 "num_base_bdevs_operational": 1, 00:24:24.345 "base_bdevs_list": [ 00:24:24.345 { 00:24:24.345 "name": null, 00:24:24.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.345 "is_configured": false, 00:24:24.345 "data_offset": 2048, 00:24:24.345 "data_size": 63488 00:24:24.345 }, 00:24:24.345 { 00:24:24.345 "name": "BaseBdev2", 00:24:24.345 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:24.345 "is_configured": true, 00:24:24.345 "data_offset": 2048, 00:24:24.345 "data_size": 63488 00:24:24.345 } 00:24:24.345 ] 00:24:24.345 }' 00:24:24.345 13:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:24.345 13:24:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:24.914 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:24.914 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:24.914 13:24:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:24.914 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:24.914 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:24.914 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.914 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.173 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:25.173 "name": "raid_bdev1", 00:24:25.173 "uuid": "33f10461-4bbb-4a34-88a9-fc969e8d3c2b", 00:24:25.173 "strip_size_kb": 0, 00:24:25.173 "state": "online", 00:24:25.173 "raid_level": "raid1", 00:24:25.173 "superblock": true, 00:24:25.173 "num_base_bdevs": 2, 00:24:25.173 "num_base_bdevs_discovered": 1, 00:24:25.173 "num_base_bdevs_operational": 1, 00:24:25.173 "base_bdevs_list": [ 00:24:25.173 { 00:24:25.173 "name": null, 00:24:25.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.173 "is_configured": false, 00:24:25.173 "data_offset": 2048, 00:24:25.173 "data_size": 63488 00:24:25.173 }, 00:24:25.173 { 00:24:25.173 "name": "BaseBdev2", 00:24:25.173 "uuid": "cc9bcbf8-c079-5021-a137-5a459d768de0", 00:24:25.173 "is_configured": true, 00:24:25.173 "data_offset": 2048, 00:24:25.173 "data_size": 63488 00:24:25.173 } 00:24:25.173 ] 00:24:25.173 }' 00:24:25.173 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:25.173 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:25.173 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:25.173 13:24:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:25.173 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 968729 00:24:25.173 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 968729 ']' 00:24:25.173 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 968729 00:24:25.173 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:24:25.173 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:25.173 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 968729 00:24:25.432 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:25.432 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:25.432 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 968729' 00:24:25.432 killing process with pid 968729 00:24:25.432 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 968729 00:24:25.432 Received shutdown signal, test time was about 26.192561 seconds 00:24:25.432 00:24:25.432 Latency(us) 00:24:25.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.432 =================================================================================================================== 00:24:25.432 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:25.432 [2024-07-25 13:24:35.702096] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:25.432 [2024-07-25 13:24:35.702184] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:25.432 [2024-07-25 13:24:35.702225] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:24:25.432 [2024-07-25 13:24:35.702236] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22979c0 name raid_bdev1, state offline 00:24:25.432 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 968729 00:24:25.432 [2024-07-25 13:24:35.720912] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:25.432 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:24:25.432 00:24:25.432 real 0m30.607s 00:24:25.432 user 0m47.583s 00:24:25.432 sys 0m4.452s 00:24:25.432 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:25.432 13:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:25.432 ************************************ 00:24:25.432 END TEST raid_rebuild_test_sb_io 00:24:25.432 ************************************ 00:24:25.691 13:24:35 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:24:25.691 13:24:35 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:24:25.691 13:24:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:25.691 13:24:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:25.691 13:24:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:25.691 ************************************ 00:24:25.691 START TEST raid_rebuild_test 00:24:25.691 ************************************ 00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:24:25.691 
13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 ))
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ ))
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ ))
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ ))
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ ))
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']'
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']'
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=974379
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 974379 /var/tmp/spdk-raid.sock
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 974379 ']'
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:24:25.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:25.691 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:24:25.691 [2024-07-25 13:24:36.071956] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:24:25.692 [2024-07-25 13:24:36.072015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974379 ]
00:24:25.692 I/O size of 3145728 is greater than zero copy threshold (65536).
00:24:25.692 Zero copy mechanism will not be used.
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3d:01.0 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3d:01.1 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3d:01.2 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3d:01.3 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3d:01.4 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3d:01.5 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3d:01.6 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3d:01.7 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3d:02.0 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3d:02.1 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3d:02.2 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3d:02.3 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3d:02.4 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3d:02.5 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3d:02.6 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3d:02.7 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3f:01.0 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3f:01.1 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3f:01.2 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3f:01.3 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3f:01.4 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3f:01.5 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3f:01.6 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3f:01.7 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3f:02.0 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3f:02.1 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3f:02.2 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3f:02.3 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3f:02.4 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3f:02.5 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3f:02.6 cannot be used
00:24:25.692 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:24:25.692 EAL: Requested device 0000:3f:02.7 cannot be used
00:24:25.950 [2024-07-25 13:24:36.205122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:25.951 [2024-07-25 13:24:36.291850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:25.951 [2024-07-25 13:24:36.350303] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:25.951 [2024-07-25 13:24:36.350354] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:26.517 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:26.517 13:24:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0
00:24:26.517 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in
"${base_bdevs[@]}"
00:24:26.517 13:24:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:24:26.776 BaseBdev1_malloc
00:24:26.776 13:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:24:27.034 [2024-07-25 13:24:37.406354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:24:27.034 [2024-07-25 13:24:37.406394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:27.034 [2024-07-25 13:24:37.406413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15715f0
00:24:27.034 [2024-07-25 13:24:37.406425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:27.034 [2024-07-25 13:24:37.407894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:27.034 [2024-07-25 13:24:37.407920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:24:27.034 BaseBdev1
00:24:27.034 13:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:24:27.034 13:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:24:27.293 BaseBdev2_malloc
00:24:27.293 13:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:24:27.552 [2024-07-25 13:24:37.868116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:24:27.552 [2024-07-25 13:24:37.868162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:27.552 [2024-07-25 13:24:37.868179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1714fd0
00:24:27.552 [2024-07-25 13:24:37.868191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:27.552 [2024-07-25 13:24:37.869599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:27.552 [2024-07-25 13:24:37.869625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:24:27.552 BaseBdev2
00:24:27.552 13:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:24:27.552 13:24:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:24:27.810 BaseBdev3_malloc
00:24:27.810 13:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:24:28.068 [2024-07-25 13:24:38.329660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:24:28.068 [2024-07-25 13:24:38.329701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:28.068 [2024-07-25 13:24:38.329718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x170ada0
00:24:28.068 [2024-07-25 13:24:38.329729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:28.068 [2024-07-25 13:24:38.331077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:28.068 [2024-07-25 13:24:38.331104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:24:28.068 BaseBdev3
00:24:28.068 13:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:24:28.068 13:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:24:28.068 BaseBdev4_malloc
00:24:28.068 13:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:24:28.361 [2024-07-25 13:24:38.742911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:24:28.361 [2024-07-25 13:24:38.742954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:28.361 [2024-07-25 13:24:38.742971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1569290
00:24:28.361 [2024-07-25 13:24:38.742982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:28.361 [2024-07-25 13:24:38.744363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:28.361 [2024-07-25 13:24:38.744389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:24:28.361 BaseBdev4
00:24:28.361 13:24:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:24:28.928 spare_malloc
00:24:28.928 13:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:24:29.187 spare_delay
00:24:29.187 13:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:24:29.754 [2024-07-25 13:24:39.982481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:24:29.754 [2024-07-25 13:24:39.982522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:29.754 [2024-07-25 13:24:39.982541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x156beb0
00:24:29.754 [2024-07-25 13:24:39.982552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:29.754 [2024-07-25 13:24:39.983972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:29.754 [2024-07-25 13:24:39.983999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:24:29.754 spare
00:24:29.754 13:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:24:29.754 [2024-07-25 13:24:40.223130] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:24:29.754 [2024-07-25 13:24:40.224367] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:24:29.754 [2024-07-25 13:24:40.224418] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:24:29.754 [2024-07-25 13:24:40.224460] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:24:29.754 [2024-07-25 13:24:40.224534] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1568e80
00:24:29.754 [2024-07-25 13:24:40.224543] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:24:29.754 [2024-07-25 13:24:40.224745] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1568d90
00:24:29.754 [2024-07-25 13:24:40.224883] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic
0x1568e80
00:24:29.754 [2024-07-25 13:24:40.224892] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1568e80
00:24:29.754 [2024-07-25 13:24:40.224999] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:29.754 13:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:24:29.754 13:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:24:29.754 13:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:24:29.754 13:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:24:29.754 13:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:24:30.013 13:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:24:30.013 13:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:24:30.013 13:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:24:30.013 13:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:24:30.013 13:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:24:30.013 13:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:30.013 13:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:30.013 13:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:24:30.013 "name": "raid_bdev1",
00:24:30.013 "uuid": "c9a3d059-fb11-4645-a07c-be2ef16e88dc",
00:24:30.013 "strip_size_kb": 0,
00:24:30.013 "state": "online",
00:24:30.013 "raid_level": "raid1",
00:24:30.013 "superblock": false,
00:24:30.013 "num_base_bdevs": 4,
00:24:30.013 "num_base_bdevs_discovered": 4,
00:24:30.013 "num_base_bdevs_operational": 4,
00:24:30.013 "base_bdevs_list": [
00:24:30.013 {
00:24:30.013 "name": "BaseBdev1",
00:24:30.013 "uuid": "4e0c2fd3-8d24-5db2-805e-f19adfa2fcaf",
00:24:30.013 "is_configured": true,
00:24:30.013 "data_offset": 0,
00:24:30.013 "data_size": 65536
00:24:30.013 },
00:24:30.013 {
00:24:30.013 "name": "BaseBdev2",
00:24:30.013 "uuid": "e951caff-8108-5eed-a6dd-e967c367e22a",
00:24:30.013 "is_configured": true,
00:24:30.013 "data_offset": 0,
00:24:30.013 "data_size": 65536
00:24:30.013 },
00:24:30.013 {
00:24:30.013 "name": "BaseBdev3",
00:24:30.013 "uuid": "b10d6676-b707-5a3f-94c6-bad0560e0afa",
00:24:30.013 "is_configured": true,
00:24:30.013 "data_offset": 0,
00:24:30.013 "data_size": 65536
00:24:30.013 },
00:24:30.013 {
00:24:30.013 "name": "BaseBdev4",
00:24:30.013 "uuid": "602ac775-85ad-54ad-a685-6a0b81597355",
00:24:30.013 "is_configured": true,
00:24:30.013 "data_offset": 0,
00:24:30.013 "data_size": 65536
00:24:30.013 }
00:24:30.013 ]
00:24:30.013 }'
00:24:30.013 13:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:24:30.013 13:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:24:30.580 13:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:24:30.580 13:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks'
00:24:30.839 [2024-07-25 13:24:41.258108] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:24:30.839 13:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536
00:24:30.839 13:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:30.839 13:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:24:31.098 13:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0
00:24:31.098 13:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']'
00:24:31.098 13:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']'
00:24:31.098 13:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size
00:24:31.098 13:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:24:31.098 13:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:31.098 13:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:24:31.098 13:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:24:31.098 13:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:24:31.098 13:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:24:31.098 13:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:24:31.098 13:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:24:31.098 13:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:24:31.098 13:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:24:31.357 [2024-07-25 13:24:41.715061] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1568d90
00:24:31.357 /dev/nbd0
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:31.357 1+0 records in
00:24:31.357 1+0 records out
00:24:31.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253227 s, 16.2 MB/s
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']'
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1
00:24:31.357 13:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
00:24:39.474 65536+0 records in
00:24:39.474 65536+0 records out
00:24:39.474 33554432 bytes (34 MB, 32 MiB) copied, 7.2329 s, 4.6 MB/s
00:24:39.474 13:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:24:39.474 13:24:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:39.474 13:24:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:24:39.474 13:24:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:24:39.474 13:24:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:24:39.474 13:24:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:39.474 13:24:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:24:39.474 13:24:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:24:39.474 [2024-07-25 13:24:49.260348] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:39.474 13:24:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:24:39.475 [2024-07-25 13:24:49.749679] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:39.475 13:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:39.734 13:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:24:39.734 "name": "raid_bdev1",
00:24:39.734 "uuid": "c9a3d059-fb11-4645-a07c-be2ef16e88dc",
00:24:39.734 "strip_size_kb": 0,
00:24:39.734 "state": "online",
00:24:39.734 "raid_level": "raid1",
00:24:39.734 "superblock": false,
00:24:39.734 "num_base_bdevs": 4,
00:24:39.734 "num_base_bdevs_discovered": 3,
00:24:39.734 "num_base_bdevs_operational": 3,
00:24:39.734 "base_bdevs_list": [
00:24:39.734 {
00:24:39.734 "name": null,
00:24:39.734 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:39.734 "is_configured": false,
00:24:39.734 "data_offset": 0,
00:24:39.734 "data_size": 65536
00:24:39.734 },
00:24:39.734 {
00:24:39.734 "name": "BaseBdev2",
00:24:39.734 "uuid": "e951caff-8108-5eed-a6dd-e967c367e22a",
00:24:39.734 "is_configured": true,
00:24:39.734 "data_offset": 0,
00:24:39.734 "data_size": 65536
00:24:39.734 },
00:24:39.734 {
00:24:39.734 "name": "BaseBdev3",
00:24:39.734 "uuid": "b10d6676-b707-5a3f-94c6-bad0560e0afa",
00:24:39.734 "is_configured": true,
00:24:39.734 "data_offset": 0,
00:24:39.734 "data_size": 65536
00:24:39.734 },
00:24:39.734 {
00:24:39.734 "name": "BaseBdev4",
00:24:39.734 "uuid": "602ac775-85ad-54ad-a685-6a0b81597355",
00:24:39.734 "is_configured": true,
00:24:39.734 "data_offset": 0,
00:24:39.734 "data_size": 65536
00:24:39.734 }
00:24:39.734 ]
00:24:39.734 }'
00:24:39.734 13:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:24:39.734 13:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:24:40.300 13:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:24:40.300 [2024-07-25 13:24:50.736289] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:40.300 [2024-07-25 13:24:50.740155] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17095d0
00:24:40.300 [2024-07-25 13:24:50.742219]
bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:40.300 13:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:41.677 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:41.677 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:24:41.677 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:24:41.677 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare
00:24:41.677 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:24:41.677 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:41.677 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:41.677 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:24:41.677 "name": "raid_bdev1",
00:24:41.678 "uuid": "c9a3d059-fb11-4645-a07c-be2ef16e88dc",
00:24:41.678 "strip_size_kb": 0,
00:24:41.678 "state": "online",
00:24:41.678 "raid_level": "raid1",
00:24:41.678 "superblock": false,
00:24:41.678 "num_base_bdevs": 4,
00:24:41.678 "num_base_bdevs_discovered": 4,
00:24:41.678 "num_base_bdevs_operational": 4,
00:24:41.678 "process": {
00:24:41.678 "type": "rebuild",
00:24:41.678 "target": "spare",
00:24:41.678 "progress": {
00:24:41.678 "blocks": 24576,
00:24:41.678 "percent": 37
00:24:41.678 }
00:24:41.678 },
00:24:41.678 "base_bdevs_list": [
00:24:41.678 {
00:24:41.678 "name": "spare",
00:24:41.678 "uuid": "d5b17be4-af59-5fd7-ac97-659c1c9360f7",
00:24:41.678 "is_configured": true,
00:24:41.678 "data_offset": 0,
00:24:41.678 "data_size": 65536
00:24:41.678 },
00:24:41.678 {
00:24:41.678 "name": "BaseBdev2",
00:24:41.678 "uuid": "e951caff-8108-5eed-a6dd-e967c367e22a",
00:24:41.678 "is_configured": true,
00:24:41.678 "data_offset": 0,
00:24:41.678 "data_size": 65536
00:24:41.678 },
00:24:41.678 {
00:24:41.678 "name": "BaseBdev3",
00:24:41.678 "uuid": "b10d6676-b707-5a3f-94c6-bad0560e0afa",
00:24:41.678 "is_configured": true,
00:24:41.678 "data_offset": 0,
00:24:41.678 "data_size": 65536
00:24:41.678 },
00:24:41.678 {
00:24:41.678 "name": "BaseBdev4",
00:24:41.678 "uuid": "602ac775-85ad-54ad-a685-6a0b81597355",
00:24:41.678 "is_configured": true,
00:24:41.678 "data_offset": 0,
00:24:41.678 "data_size": 65536
00:24:41.678 }
00:24:41.678 ]
00:24:41.678 }'
00:24:41.678 13:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:24:41.678 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:41.678 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:24:41.678 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:24:41.678 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:24:41.937 [2024-07-25 13:24:52.291219] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:41.937 [2024-07-25 13:24:52.353926] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:24:41.937 [2024-07-25 13:24:52.353966] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:41.937 [2024-07-25 13:24:52.353982] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:41.937 [2024-07-25 13:24:52.353989] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:24:41.937 13:24:52
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:41.937 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:41.937 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:41.937 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:41.937 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:41.937 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:41.937 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:41.937 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:41.937 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:41.937 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:41.937 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.937 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.196 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:42.196 "name": "raid_bdev1", 00:24:42.196 "uuid": "c9a3d059-fb11-4645-a07c-be2ef16e88dc", 00:24:42.196 "strip_size_kb": 0, 00:24:42.196 "state": "online", 00:24:42.196 "raid_level": "raid1", 00:24:42.196 "superblock": false, 00:24:42.196 "num_base_bdevs": 4, 00:24:42.196 "num_base_bdevs_discovered": 3, 00:24:42.196 "num_base_bdevs_operational": 3, 00:24:42.196 "base_bdevs_list": [ 00:24:42.196 { 00:24:42.196 "name": null, 00:24:42.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.196 "is_configured": false, 
00:24:42.196 "data_offset": 0, 00:24:42.196 "data_size": 65536 00:24:42.196 }, 00:24:42.196 { 00:24:42.196 "name": "BaseBdev2", 00:24:42.196 "uuid": "e951caff-8108-5eed-a6dd-e967c367e22a", 00:24:42.196 "is_configured": true, 00:24:42.196 "data_offset": 0, 00:24:42.196 "data_size": 65536 00:24:42.196 }, 00:24:42.196 { 00:24:42.196 "name": "BaseBdev3", 00:24:42.196 "uuid": "b10d6676-b707-5a3f-94c6-bad0560e0afa", 00:24:42.196 "is_configured": true, 00:24:42.196 "data_offset": 0, 00:24:42.196 "data_size": 65536 00:24:42.196 }, 00:24:42.196 { 00:24:42.196 "name": "BaseBdev4", 00:24:42.196 "uuid": "602ac775-85ad-54ad-a685-6a0b81597355", 00:24:42.196 "is_configured": true, 00:24:42.196 "data_offset": 0, 00:24:42.196 "data_size": 65536 00:24:42.196 } 00:24:42.196 ] 00:24:42.196 }' 00:24:42.196 13:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:42.196 13:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.763 13:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:42.763 13:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:42.763 13:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:42.763 13:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:42.763 13:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:42.763 13:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.763 13:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.022 13:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:43.022 "name": "raid_bdev1", 00:24:43.022 "uuid": 
"c9a3d059-fb11-4645-a07c-be2ef16e88dc", 00:24:43.022 "strip_size_kb": 0, 00:24:43.022 "state": "online", 00:24:43.022 "raid_level": "raid1", 00:24:43.022 "superblock": false, 00:24:43.022 "num_base_bdevs": 4, 00:24:43.022 "num_base_bdevs_discovered": 3, 00:24:43.022 "num_base_bdevs_operational": 3, 00:24:43.022 "base_bdevs_list": [ 00:24:43.022 { 00:24:43.022 "name": null, 00:24:43.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.023 "is_configured": false, 00:24:43.023 "data_offset": 0, 00:24:43.023 "data_size": 65536 00:24:43.023 }, 00:24:43.023 { 00:24:43.023 "name": "BaseBdev2", 00:24:43.023 "uuid": "e951caff-8108-5eed-a6dd-e967c367e22a", 00:24:43.023 "is_configured": true, 00:24:43.023 "data_offset": 0, 00:24:43.023 "data_size": 65536 00:24:43.023 }, 00:24:43.023 { 00:24:43.023 "name": "BaseBdev3", 00:24:43.023 "uuid": "b10d6676-b707-5a3f-94c6-bad0560e0afa", 00:24:43.023 "is_configured": true, 00:24:43.023 "data_offset": 0, 00:24:43.023 "data_size": 65536 00:24:43.023 }, 00:24:43.023 { 00:24:43.023 "name": "BaseBdev4", 00:24:43.023 "uuid": "602ac775-85ad-54ad-a685-6a0b81597355", 00:24:43.023 "is_configured": true, 00:24:43.023 "data_offset": 0, 00:24:43.023 "data_size": 65536 00:24:43.023 } 00:24:43.023 ] 00:24:43.023 }' 00:24:43.023 13:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:43.023 13:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:43.023 13:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:43.023 13:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:43.023 13:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:43.281 [2024-07-25 13:24:53.705385] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:24:43.281 [2024-07-25 13:24:53.709288] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x170a7d0 00:24:43.281 [2024-07-25 13:24:53.710674] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:43.281 13:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:24:44.657 13:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:44.657 13:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:44.657 13:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:44.657 13:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:44.657 13:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:44.657 13:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.657 13:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.657 13:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:44.657 "name": "raid_bdev1", 00:24:44.657 "uuid": "c9a3d059-fb11-4645-a07c-be2ef16e88dc", 00:24:44.657 "strip_size_kb": 0, 00:24:44.657 "state": "online", 00:24:44.657 "raid_level": "raid1", 00:24:44.657 "superblock": false, 00:24:44.657 "num_base_bdevs": 4, 00:24:44.657 "num_base_bdevs_discovered": 4, 00:24:44.657 "num_base_bdevs_operational": 4, 00:24:44.657 "process": { 00:24:44.657 "type": "rebuild", 00:24:44.657 "target": "spare", 00:24:44.657 "progress": { 00:24:44.657 "blocks": 24576, 00:24:44.657 "percent": 37 00:24:44.657 } 00:24:44.657 }, 00:24:44.657 "base_bdevs_list": [ 00:24:44.657 { 00:24:44.657 "name": "spare", 00:24:44.657 "uuid": 
"d5b17be4-af59-5fd7-ac97-659c1c9360f7", 00:24:44.657 "is_configured": true, 00:24:44.657 "data_offset": 0, 00:24:44.657 "data_size": 65536 00:24:44.657 }, 00:24:44.657 { 00:24:44.657 "name": "BaseBdev2", 00:24:44.657 "uuid": "e951caff-8108-5eed-a6dd-e967c367e22a", 00:24:44.657 "is_configured": true, 00:24:44.657 "data_offset": 0, 00:24:44.657 "data_size": 65536 00:24:44.657 }, 00:24:44.657 { 00:24:44.657 "name": "BaseBdev3", 00:24:44.657 "uuid": "b10d6676-b707-5a3f-94c6-bad0560e0afa", 00:24:44.657 "is_configured": true, 00:24:44.657 "data_offset": 0, 00:24:44.657 "data_size": 65536 00:24:44.657 }, 00:24:44.657 { 00:24:44.657 "name": "BaseBdev4", 00:24:44.657 "uuid": "602ac775-85ad-54ad-a685-6a0b81597355", 00:24:44.657 "is_configured": true, 00:24:44.657 "data_offset": 0, 00:24:44.657 "data_size": 65536 00:24:44.657 } 00:24:44.657 ] 00:24:44.657 }' 00:24:44.657 13:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:44.657 13:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:44.657 13:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:44.657 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:44.657 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:24:44.657 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:24:44.657 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:24:44.657 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:24:44.657 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:44.916 [2024-07-25 13:24:55.223555] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:44.916 [2024-07-25 13:24:55.322360] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x170a7d0 00:24:44.916 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:24:44.916 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:24:44.916 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:44.916 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:44.916 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:44.916 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:44.916 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:44.916 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.916 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.175 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:45.175 "name": "raid_bdev1", 00:24:45.175 "uuid": "c9a3d059-fb11-4645-a07c-be2ef16e88dc", 00:24:45.175 "strip_size_kb": 0, 00:24:45.175 "state": "online", 00:24:45.175 "raid_level": "raid1", 00:24:45.175 "superblock": false, 00:24:45.175 "num_base_bdevs": 4, 00:24:45.175 "num_base_bdevs_discovered": 3, 00:24:45.175 "num_base_bdevs_operational": 3, 00:24:45.175 "process": { 00:24:45.175 "type": "rebuild", 00:24:45.175 "target": "spare", 00:24:45.175 "progress": { 00:24:45.175 "blocks": 36864, 00:24:45.175 "percent": 56 00:24:45.175 } 00:24:45.175 }, 00:24:45.175 "base_bdevs_list": [ 00:24:45.175 { 00:24:45.175 
"name": "spare", 00:24:45.175 "uuid": "d5b17be4-af59-5fd7-ac97-659c1c9360f7", 00:24:45.175 "is_configured": true, 00:24:45.175 "data_offset": 0, 00:24:45.175 "data_size": 65536 00:24:45.175 }, 00:24:45.175 { 00:24:45.175 "name": null, 00:24:45.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.175 "is_configured": false, 00:24:45.175 "data_offset": 0, 00:24:45.175 "data_size": 65536 00:24:45.175 }, 00:24:45.175 { 00:24:45.175 "name": "BaseBdev3", 00:24:45.175 "uuid": "b10d6676-b707-5a3f-94c6-bad0560e0afa", 00:24:45.175 "is_configured": true, 00:24:45.175 "data_offset": 0, 00:24:45.175 "data_size": 65536 00:24:45.175 }, 00:24:45.175 { 00:24:45.175 "name": "BaseBdev4", 00:24:45.175 "uuid": "602ac775-85ad-54ad-a685-6a0b81597355", 00:24:45.175 "is_configured": true, 00:24:45.175 "data_offset": 0, 00:24:45.175 "data_size": 65536 00:24:45.175 } 00:24:45.175 ] 00:24:45.175 }' 00:24:45.175 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:45.175 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:45.175 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:45.175 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:45.175 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=850 00:24:45.175 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:45.175 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:45.175 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:45.175 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:45.175 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:45.175 
13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:45.175 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.175 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.435 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:45.435 "name": "raid_bdev1", 00:24:45.435 "uuid": "c9a3d059-fb11-4645-a07c-be2ef16e88dc", 00:24:45.435 "strip_size_kb": 0, 00:24:45.435 "state": "online", 00:24:45.435 "raid_level": "raid1", 00:24:45.435 "superblock": false, 00:24:45.435 "num_base_bdevs": 4, 00:24:45.435 "num_base_bdevs_discovered": 3, 00:24:45.435 "num_base_bdevs_operational": 3, 00:24:45.435 "process": { 00:24:45.435 "type": "rebuild", 00:24:45.435 "target": "spare", 00:24:45.435 "progress": { 00:24:45.435 "blocks": 43008, 00:24:45.435 "percent": 65 00:24:45.435 } 00:24:45.435 }, 00:24:45.435 "base_bdevs_list": [ 00:24:45.435 { 00:24:45.435 "name": "spare", 00:24:45.435 "uuid": "d5b17be4-af59-5fd7-ac97-659c1c9360f7", 00:24:45.435 "is_configured": true, 00:24:45.435 "data_offset": 0, 00:24:45.435 "data_size": 65536 00:24:45.435 }, 00:24:45.435 { 00:24:45.435 "name": null, 00:24:45.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.435 "is_configured": false, 00:24:45.435 "data_offset": 0, 00:24:45.435 "data_size": 65536 00:24:45.435 }, 00:24:45.435 { 00:24:45.435 "name": "BaseBdev3", 00:24:45.435 "uuid": "b10d6676-b707-5a3f-94c6-bad0560e0afa", 00:24:45.435 "is_configured": true, 00:24:45.435 "data_offset": 0, 00:24:45.435 "data_size": 65536 00:24:45.435 }, 00:24:45.435 { 00:24:45.435 "name": "BaseBdev4", 00:24:45.435 "uuid": "602ac775-85ad-54ad-a685-6a0b81597355", 00:24:45.435 "is_configured": true, 00:24:45.435 "data_offset": 0, 00:24:45.435 "data_size": 65536 00:24:45.435 } 
00:24:45.435 ] 00:24:45.435 }' 00:24:45.435 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:45.694 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:45.694 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:45.694 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:45.694 13:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:46.629 [2024-07-25 13:24:56.933992] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:46.629 [2024-07-25 13:24:56.934044] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:46.629 [2024-07-25 13:24:56.934079] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:46.629 13:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:46.629 13:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:46.629 13:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:46.629 13:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:46.629 13:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:46.629 13:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:46.629 13:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.629 13:24:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.888 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:24:46.888 "name": "raid_bdev1", 00:24:46.888 "uuid": "c9a3d059-fb11-4645-a07c-be2ef16e88dc", 00:24:46.888 "strip_size_kb": 0, 00:24:46.888 "state": "online", 00:24:46.888 "raid_level": "raid1", 00:24:46.888 "superblock": false, 00:24:46.888 "num_base_bdevs": 4, 00:24:46.888 "num_base_bdevs_discovered": 3, 00:24:46.888 "num_base_bdevs_operational": 3, 00:24:46.888 "base_bdevs_list": [ 00:24:46.888 { 00:24:46.888 "name": "spare", 00:24:46.888 "uuid": "d5b17be4-af59-5fd7-ac97-659c1c9360f7", 00:24:46.888 "is_configured": true, 00:24:46.888 "data_offset": 0, 00:24:46.888 "data_size": 65536 00:24:46.888 }, 00:24:46.888 { 00:24:46.888 "name": null, 00:24:46.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.888 "is_configured": false, 00:24:46.888 "data_offset": 0, 00:24:46.888 "data_size": 65536 00:24:46.888 }, 00:24:46.888 { 00:24:46.888 "name": "BaseBdev3", 00:24:46.888 "uuid": "b10d6676-b707-5a3f-94c6-bad0560e0afa", 00:24:46.888 "is_configured": true, 00:24:46.888 "data_offset": 0, 00:24:46.888 "data_size": 65536 00:24:46.888 }, 00:24:46.888 { 00:24:46.888 "name": "BaseBdev4", 00:24:46.888 "uuid": "602ac775-85ad-54ad-a685-6a0b81597355", 00:24:46.888 "is_configured": true, 00:24:46.888 "data_offset": 0, 00:24:46.888 "data_size": 65536 00:24:46.888 } 00:24:46.888 ] 00:24:46.888 }' 00:24:46.888 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:46.888 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:46.888 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:46.888 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:46.888 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:24:46.888 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:46.888 
13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:46.888 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:46.888 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:46.888 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:46.888 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.888 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:47.147 "name": "raid_bdev1", 00:24:47.147 "uuid": "c9a3d059-fb11-4645-a07c-be2ef16e88dc", 00:24:47.147 "strip_size_kb": 0, 00:24:47.147 "state": "online", 00:24:47.147 "raid_level": "raid1", 00:24:47.147 "superblock": false, 00:24:47.147 "num_base_bdevs": 4, 00:24:47.147 "num_base_bdevs_discovered": 3, 00:24:47.147 "num_base_bdevs_operational": 3, 00:24:47.147 "base_bdevs_list": [ 00:24:47.147 { 00:24:47.147 "name": "spare", 00:24:47.147 "uuid": "d5b17be4-af59-5fd7-ac97-659c1c9360f7", 00:24:47.147 "is_configured": true, 00:24:47.147 "data_offset": 0, 00:24:47.147 "data_size": 65536 00:24:47.147 }, 00:24:47.147 { 00:24:47.147 "name": null, 00:24:47.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.147 "is_configured": false, 00:24:47.147 "data_offset": 0, 00:24:47.147 "data_size": 65536 00:24:47.147 }, 00:24:47.147 { 00:24:47.147 "name": "BaseBdev3", 00:24:47.147 "uuid": "b10d6676-b707-5a3f-94c6-bad0560e0afa", 00:24:47.147 "is_configured": true, 00:24:47.147 "data_offset": 0, 00:24:47.147 "data_size": 65536 00:24:47.147 }, 00:24:47.147 { 00:24:47.147 "name": "BaseBdev4", 00:24:47.147 "uuid": "602ac775-85ad-54ad-a685-6a0b81597355", 00:24:47.147 
"is_configured": true, 00:24:47.147 "data_offset": 0, 00:24:47.147 "data_size": 65536 00:24:47.147 } 00:24:47.147 ] 00:24:47.147 }' 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.147 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.406 13:24:57 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:47.406 "name": "raid_bdev1", 00:24:47.406 "uuid": "c9a3d059-fb11-4645-a07c-be2ef16e88dc", 00:24:47.406 "strip_size_kb": 0, 00:24:47.406 "state": "online", 00:24:47.406 "raid_level": "raid1", 00:24:47.406 "superblock": false, 00:24:47.406 "num_base_bdevs": 4, 00:24:47.406 "num_base_bdevs_discovered": 3, 00:24:47.406 "num_base_bdevs_operational": 3, 00:24:47.406 "base_bdevs_list": [ 00:24:47.406 { 00:24:47.406 "name": "spare", 00:24:47.406 "uuid": "d5b17be4-af59-5fd7-ac97-659c1c9360f7", 00:24:47.406 "is_configured": true, 00:24:47.406 "data_offset": 0, 00:24:47.406 "data_size": 65536 00:24:47.406 }, 00:24:47.406 { 00:24:47.406 "name": null, 00:24:47.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.406 "is_configured": false, 00:24:47.406 "data_offset": 0, 00:24:47.406 "data_size": 65536 00:24:47.406 }, 00:24:47.406 { 00:24:47.406 "name": "BaseBdev3", 00:24:47.406 "uuid": "b10d6676-b707-5a3f-94c6-bad0560e0afa", 00:24:47.406 "is_configured": true, 00:24:47.406 "data_offset": 0, 00:24:47.406 "data_size": 65536 00:24:47.406 }, 00:24:47.406 { 00:24:47.406 "name": "BaseBdev4", 00:24:47.406 "uuid": "602ac775-85ad-54ad-a685-6a0b81597355", 00:24:47.406 "is_configured": true, 00:24:47.406 "data_offset": 0, 00:24:47.406 "data_size": 65536 00:24:47.406 } 00:24:47.406 ] 00:24:47.406 }' 00:24:47.406 13:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:47.406 13:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.971 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:48.230 [2024-07-25 13:24:58.617786] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:48.230 [2024-07-25 13:24:58.617811] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:24:48.230 [2024-07-25 13:24:58.617860] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:48.230 [2024-07-25 13:24:58.617921] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:48.230 [2024-07-25 13:24:58.617932] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1568e80 name raid_bdev1, state offline 00:24:48.230 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.230 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:24:48.504 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:24:48.504 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:24:48.504 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:24:48.504 13:24:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:48.504 13:24:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:48.504 13:24:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:48.504 13:24:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:48.504 13:24:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:48.504 13:24:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:48.504 13:24:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:24:48.504 13:24:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:48.504 13:24:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:24:48.504 13:24:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:48.769 /dev/nbd0 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:48.769 1+0 records in 00:24:48.769 1+0 records out 00:24:48.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235355 s, 17.4 MB/s 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:48.769 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:49.027 /dev/nbd1 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.027 1+0 records in 00:24:49.027 1+0 records out 00:24:49.027 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000294127 s, 13.9 MB/s 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:49.027 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:49.286 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:49.286 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:24:49.286 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:49.286 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:49.286 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:49.286 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:49.286 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:49.286 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:49.286 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:49.286 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:49.545 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:49.545 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:49.545 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:49.545 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:49.545 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:49.545 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:49.545 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:49.545 13:24:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:49.545 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:24:49.545 13:24:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 974379 00:24:49.545 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 974379 ']' 00:24:49.545 13:24:59 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 974379 00:24:49.545 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:24:49.545 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:49.545 13:24:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 974379 00:24:49.803 13:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:49.803 13:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:49.803 13:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 974379' 00:24:49.803 killing process with pid 974379 00:24:49.803 13:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 974379 00:24:49.803 Received shutdown signal, test time was about 60.000000 seconds 00:24:49.803 00:24:49.803 Latency(us) 00:24:49.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.803 =================================================================================================================== 00:24:49.803 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:49.803 [2024-07-25 13:25:00.035887] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:49.803 13:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 974379 00:24:49.803 [2024-07-25 13:25:00.074179] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:49.803 13:25:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:24:49.803 00:24:49.803 real 0m24.261s 00:24:49.803 user 0m32.984s 00:24:49.803 sys 0m4.978s 00:24:49.803 13:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:49.803 13:25:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:24:49.803 ************************************ 00:24:49.803 END TEST raid_rebuild_test 00:24:49.803 ************************************ 00:24:50.061 13:25:00 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:24:50.061 13:25:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:50.061 13:25:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:50.061 13:25:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:50.061 ************************************ 00:24:50.061 START TEST raid_rebuild_test_sb 00:24:50.061 ************************************ 00:24:50.061 13:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:24:50.061 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:24:50.061 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:24:50.061 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:24:50.061 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:24:50.061 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:24:50.061 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:24:50.061 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:50.061 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:24:50.061 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- 
# raid_pid=978646 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 978646 /var/tmp/spdk-raid.sock 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 978646 ']' 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:50.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:50.062 13:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:50.062 [2024-07-25 13:25:00.421829] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:24:50.062 [2024-07-25 13:25:00.421886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid978646 ] 00:24:50.062 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:50.062 Zero copy mechanism will not be used. 
00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:50.062 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:50.062 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:50.062 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:50.062 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:50.321 [2024-07-25 13:25:00.552701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.321 [2024-07-25 13:25:00.639126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.321 [2024-07-25 13:25:00.703968] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:50.321 [2024-07-25 13:25:00.703999] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:50.885 13:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:50.885 13:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:24:50.885 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:50.885 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:51.143 BaseBdev1_malloc 00:24:51.143 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:51.401 [2024-07-25 13:25:01.764421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:51.401 [2024-07-25 13:25:01.764464] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:24:51.401 [2024-07-25 13:25:01.764488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13c35f0 00:24:51.401 [2024-07-25 13:25:01.764500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:51.401 [2024-07-25 13:25:01.765996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:51.401 [2024-07-25 13:25:01.766023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:51.401 BaseBdev1 00:24:51.401 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:51.401 13:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:51.658 BaseBdev2_malloc 00:24:51.658 13:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:51.917 [2024-07-25 13:25:02.206105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:51.917 [2024-07-25 13:25:02.206151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:51.917 [2024-07-25 13:25:02.206170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1566fd0 00:24:51.917 [2024-07-25 13:25:02.206181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:51.917 [2024-07-25 13:25:02.207558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:51.917 [2024-07-25 13:25:02.207584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:51.917 BaseBdev2 00:24:51.917 13:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in 
"${base_bdevs[@]}" 00:24:51.917 13:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:52.175 BaseBdev3_malloc 00:24:52.175 13:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:52.175 [2024-07-25 13:25:02.651599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:52.175 [2024-07-25 13:25:02.651640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:52.175 [2024-07-25 13:25:02.651659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x155cda0 00:24:52.175 [2024-07-25 13:25:02.651670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:52.175 [2024-07-25 13:25:02.653025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:52.175 [2024-07-25 13:25:02.653050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:52.175 BaseBdev3 00:24:52.434 13:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:52.434 13:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:52.434 BaseBdev4_malloc 00:24:52.434 13:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:52.693 [2024-07-25 13:25:03.109005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:52.693 [2024-07-25 
13:25:03.109046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:52.693 [2024-07-25 13:25:03.109064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13bb290 00:24:52.693 [2024-07-25 13:25:03.109076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:52.693 [2024-07-25 13:25:03.110415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:52.694 [2024-07-25 13:25:03.110440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:52.694 BaseBdev4 00:24:52.694 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:52.951 spare_malloc 00:24:52.951 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:53.210 spare_delay 00:24:53.210 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:53.467 [2024-07-25 13:25:03.794980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:53.467 [2024-07-25 13:25:03.795021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.467 [2024-07-25 13:25:03.795042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13bdeb0 00:24:53.467 [2024-07-25 13:25:03.795053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.467 [2024-07-25 13:25:03.796447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.467 [2024-07-25 13:25:03.796473] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:53.467 spare 00:24:53.467 13:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:53.725 [2024-07-25 13:25:04.023614] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:53.725 [2024-07-25 13:25:04.024837] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:53.725 [2024-07-25 13:25:04.024887] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:53.725 [2024-07-25 13:25:04.024929] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:53.725 [2024-07-25 13:25:04.025094] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x13bae80 00:24:53.725 [2024-07-25 13:25:04.025105] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:53.725 [2024-07-25 13:25:04.025299] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13bad90 00:24:53.725 [2024-07-25 13:25:04.025434] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13bae80 00:24:53.725 [2024-07-25 13:25:04.025443] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x13bae80 00:24:53.725 [2024-07-25 13:25:04.025545] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:53.725 13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:53.725 13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:53.725 13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:53.725 
13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:53.725 13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:53.725 13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:53.725 13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:53.725 13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:53.725 13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:53.725 13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:53.725 13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.725 13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.983 13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:53.983 "name": "raid_bdev1", 00:24:53.983 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:24:53.983 "strip_size_kb": 0, 00:24:53.983 "state": "online", 00:24:53.983 "raid_level": "raid1", 00:24:53.983 "superblock": true, 00:24:53.983 "num_base_bdevs": 4, 00:24:53.983 "num_base_bdevs_discovered": 4, 00:24:53.983 "num_base_bdevs_operational": 4, 00:24:53.983 "base_bdevs_list": [ 00:24:53.983 { 00:24:53.983 "name": "BaseBdev1", 00:24:53.983 "uuid": "649ab3ce-c717-545b-ad61-da6aeec31fea", 00:24:53.983 "is_configured": true, 00:24:53.983 "data_offset": 2048, 00:24:53.983 "data_size": 63488 00:24:53.983 }, 00:24:53.983 { 00:24:53.983 "name": "BaseBdev2", 00:24:53.983 "uuid": "35e8ef30-366e-5b5b-8d80-c272d5c6bd8a", 00:24:53.983 "is_configured": true, 00:24:53.983 "data_offset": 2048, 00:24:53.983 "data_size": 63488 00:24:53.983 }, 
00:24:53.983 { 00:24:53.983 "name": "BaseBdev3", 00:24:53.983 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:24:53.983 "is_configured": true, 00:24:53.983 "data_offset": 2048, 00:24:53.983 "data_size": 63488 00:24:53.983 }, 00:24:53.983 { 00:24:53.983 "name": "BaseBdev4", 00:24:53.983 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:24:53.983 "is_configured": true, 00:24:53.983 "data_offset": 2048, 00:24:53.983 "data_size": 63488 00:24:53.983 } 00:24:53.983 ] 00:24:53.983 }' 00:24:53.983 13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:53.983 13:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:54.548 13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:54.548 13:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:24:54.806 [2024-07-25 13:25:05.042581] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:54.806 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:24:54.806 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.806 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:24:55.064 13:25:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:55.064 [2024-07-25 13:25:05.503562] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13bad90 00:24:55.064 /dev/nbd0 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # 
grep -q -w nbd0 /proc/partitions 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:55.064 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:55.322 1+0 records in 00:24:55.322 1+0 records out 00:24:55.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024742 s, 16.6 MB/s 00:24:55.322 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:55.322 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:24:55.322 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:55.322 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:55.322 13:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:24:55.322 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:55.322 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:55.322 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:24:55.322 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:24:55.322 13:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:25:01.883 63488+0 records in 00:25:01.883 63488+0 records out 00:25:01.883 32505856 bytes (33 MB, 31 MiB) copied, 6.02819 s, 5.4 MB/s 
00:25:01.883 13:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:01.883 13:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:01.883 13:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:01.883 13:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:01.883 13:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:01.883 13:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:01.883 13:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:01.883 [2024-07-25 13:25:11.847811] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:01.883 13:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:01.883 13:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:01.883 13:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:01.883 13:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:01.883 13:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:01.883 13:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:01.883 13:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:01.883 13:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:01.883 13:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:01.883 
[2024-07-25 13:25:12.068432] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:01.883 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:01.883 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:01.883 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:01.883 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:01.883 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:01.883 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:01.883 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:01.883 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:01.883 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:01.883 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:01.883 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.883 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.883 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:01.883 "name": "raid_bdev1", 00:25:01.883 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:01.883 "strip_size_kb": 0, 00:25:01.883 "state": "online", 00:25:01.883 "raid_level": "raid1", 00:25:01.883 "superblock": true, 00:25:01.883 "num_base_bdevs": 4, 00:25:01.883 "num_base_bdevs_discovered": 3, 00:25:01.883 "num_base_bdevs_operational": 3, 00:25:01.883 
"base_bdevs_list": [ 00:25:01.883 { 00:25:01.883 "name": null, 00:25:01.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.883 "is_configured": false, 00:25:01.883 "data_offset": 2048, 00:25:01.883 "data_size": 63488 00:25:01.883 }, 00:25:01.883 { 00:25:01.883 "name": "BaseBdev2", 00:25:01.883 "uuid": "35e8ef30-366e-5b5b-8d80-c272d5c6bd8a", 00:25:01.883 "is_configured": true, 00:25:01.883 "data_offset": 2048, 00:25:01.883 "data_size": 63488 00:25:01.883 }, 00:25:01.883 { 00:25:01.883 "name": "BaseBdev3", 00:25:01.883 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:01.883 "is_configured": true, 00:25:01.883 "data_offset": 2048, 00:25:01.883 "data_size": 63488 00:25:01.883 }, 00:25:01.883 { 00:25:01.883 "name": "BaseBdev4", 00:25:01.883 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:01.883 "is_configured": true, 00:25:01.883 "data_offset": 2048, 00:25:01.883 "data_size": 63488 00:25:01.883 } 00:25:01.883 ] 00:25:01.883 }' 00:25:01.883 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:01.883 13:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:02.449 13:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:02.707 [2024-07-25 13:25:13.063044] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:02.707 [2024-07-25 13:25:13.067165] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x155b5d0 00:25:02.707 [2024-07-25 13:25:13.069253] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:02.707 13:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:03.640 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:03.640 13:25:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:03.640 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:03.640 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:03.640 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:03.640 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.640 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.897 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:03.897 "name": "raid_bdev1", 00:25:03.897 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:03.897 "strip_size_kb": 0, 00:25:03.897 "state": "online", 00:25:03.897 "raid_level": "raid1", 00:25:03.897 "superblock": true, 00:25:03.897 "num_base_bdevs": 4, 00:25:03.897 "num_base_bdevs_discovered": 4, 00:25:03.897 "num_base_bdevs_operational": 4, 00:25:03.897 "process": { 00:25:03.897 "type": "rebuild", 00:25:03.897 "target": "spare", 00:25:03.897 "progress": { 00:25:03.897 "blocks": 24576, 00:25:03.897 "percent": 38 00:25:03.897 } 00:25:03.897 }, 00:25:03.897 "base_bdevs_list": [ 00:25:03.897 { 00:25:03.897 "name": "spare", 00:25:03.897 "uuid": "acf060b5-f704-59c9-8a35-85a219d3046c", 00:25:03.897 "is_configured": true, 00:25:03.897 "data_offset": 2048, 00:25:03.897 "data_size": 63488 00:25:03.897 }, 00:25:03.897 { 00:25:03.897 "name": "BaseBdev2", 00:25:03.897 "uuid": "35e8ef30-366e-5b5b-8d80-c272d5c6bd8a", 00:25:03.897 "is_configured": true, 00:25:03.897 "data_offset": 2048, 00:25:03.897 "data_size": 63488 00:25:03.897 }, 00:25:03.897 { 00:25:03.897 "name": "BaseBdev3", 00:25:03.897 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 
00:25:03.897 "is_configured": true, 00:25:03.897 "data_offset": 2048, 00:25:03.897 "data_size": 63488 00:25:03.897 }, 00:25:03.897 { 00:25:03.897 "name": "BaseBdev4", 00:25:03.897 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:03.897 "is_configured": true, 00:25:03.897 "data_offset": 2048, 00:25:03.897 "data_size": 63488 00:25:03.897 } 00:25:03.897 ] 00:25:03.897 }' 00:25:03.897 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:03.897 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:03.897 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:04.154 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:04.154 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:04.154 [2024-07-25 13:25:14.620802] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:04.412 [2024-07-25 13:25:14.680965] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:04.412 [2024-07-25 13:25:14.681006] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:04.412 [2024-07-25 13:25:14.681022] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:04.412 [2024-07-25 13:25:14.681029] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:04.412 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:04.412 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:04.412 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:25:04.412 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:04.412 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:04.412 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:04.412 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:04.412 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:04.412 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:04.412 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:04.412 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.412 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.668 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:04.669 "name": "raid_bdev1", 00:25:04.669 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:04.669 "strip_size_kb": 0, 00:25:04.669 "state": "online", 00:25:04.669 "raid_level": "raid1", 00:25:04.669 "superblock": true, 00:25:04.669 "num_base_bdevs": 4, 00:25:04.669 "num_base_bdevs_discovered": 3, 00:25:04.669 "num_base_bdevs_operational": 3, 00:25:04.669 "base_bdevs_list": [ 00:25:04.669 { 00:25:04.669 "name": null, 00:25:04.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.669 "is_configured": false, 00:25:04.669 "data_offset": 2048, 00:25:04.669 "data_size": 63488 00:25:04.669 }, 00:25:04.669 { 00:25:04.669 "name": "BaseBdev2", 00:25:04.669 "uuid": "35e8ef30-366e-5b5b-8d80-c272d5c6bd8a", 00:25:04.669 "is_configured": true, 00:25:04.669 "data_offset": 2048, 00:25:04.669 
"data_size": 63488 00:25:04.669 }, 00:25:04.669 { 00:25:04.669 "name": "BaseBdev3", 00:25:04.669 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:04.669 "is_configured": true, 00:25:04.669 "data_offset": 2048, 00:25:04.669 "data_size": 63488 00:25:04.669 }, 00:25:04.669 { 00:25:04.669 "name": "BaseBdev4", 00:25:04.669 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:04.669 "is_configured": true, 00:25:04.669 "data_offset": 2048, 00:25:04.669 "data_size": 63488 00:25:04.669 } 00:25:04.669 ] 00:25:04.669 }' 00:25:04.669 13:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:04.669 13:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:05.233 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:05.233 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:05.233 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:05.233 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:05.233 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:05.233 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.233 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.233 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:05.233 "name": "raid_bdev1", 00:25:05.233 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:05.233 "strip_size_kb": 0, 00:25:05.233 "state": "online", 00:25:05.233 "raid_level": "raid1", 00:25:05.233 "superblock": true, 00:25:05.233 "num_base_bdevs": 4, 00:25:05.233 
"num_base_bdevs_discovered": 3, 00:25:05.233 "num_base_bdevs_operational": 3, 00:25:05.233 "base_bdevs_list": [ 00:25:05.233 { 00:25:05.233 "name": null, 00:25:05.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.233 "is_configured": false, 00:25:05.233 "data_offset": 2048, 00:25:05.233 "data_size": 63488 00:25:05.233 }, 00:25:05.233 { 00:25:05.233 "name": "BaseBdev2", 00:25:05.233 "uuid": "35e8ef30-366e-5b5b-8d80-c272d5c6bd8a", 00:25:05.233 "is_configured": true, 00:25:05.233 "data_offset": 2048, 00:25:05.233 "data_size": 63488 00:25:05.233 }, 00:25:05.233 { 00:25:05.233 "name": "BaseBdev3", 00:25:05.233 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:05.233 "is_configured": true, 00:25:05.233 "data_offset": 2048, 00:25:05.233 "data_size": 63488 00:25:05.233 }, 00:25:05.233 { 00:25:05.233 "name": "BaseBdev4", 00:25:05.233 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:05.233 "is_configured": true, 00:25:05.233 "data_offset": 2048, 00:25:05.233 "data_size": 63488 00:25:05.233 } 00:25:05.233 ] 00:25:05.233 }' 00:25:05.233 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:05.491 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:05.491 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:05.491 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:05.491 13:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:05.748 [2024-07-25 13:25:16.000555] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:05.748 [2024-07-25 13:25:16.004387] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13bade0 00:25:05.748 [2024-07-25 13:25:16.005770] 
bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:05.748 13:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:25:06.701 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:06.701 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:06.701 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:06.701 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:06.701 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:06.701 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.701 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.986 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:06.986 "name": "raid_bdev1", 00:25:06.986 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:06.986 "strip_size_kb": 0, 00:25:06.986 "state": "online", 00:25:06.986 "raid_level": "raid1", 00:25:06.986 "superblock": true, 00:25:06.986 "num_base_bdevs": 4, 00:25:06.986 "num_base_bdevs_discovered": 4, 00:25:06.986 "num_base_bdevs_operational": 4, 00:25:06.986 "process": { 00:25:06.986 "type": "rebuild", 00:25:06.986 "target": "spare", 00:25:06.986 "progress": { 00:25:06.986 "blocks": 24576, 00:25:06.986 "percent": 38 00:25:06.986 } 00:25:06.986 }, 00:25:06.986 "base_bdevs_list": [ 00:25:06.986 { 00:25:06.986 "name": "spare", 00:25:06.986 "uuid": "acf060b5-f704-59c9-8a35-85a219d3046c", 00:25:06.986 "is_configured": true, 00:25:06.986 "data_offset": 2048, 00:25:06.986 "data_size": 63488 00:25:06.986 }, 
00:25:06.986 { 00:25:06.986 "name": "BaseBdev2", 00:25:06.986 "uuid": "35e8ef30-366e-5b5b-8d80-c272d5c6bd8a", 00:25:06.986 "is_configured": true, 00:25:06.986 "data_offset": 2048, 00:25:06.986 "data_size": 63488 00:25:06.986 }, 00:25:06.986 { 00:25:06.986 "name": "BaseBdev3", 00:25:06.986 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:06.986 "is_configured": true, 00:25:06.986 "data_offset": 2048, 00:25:06.986 "data_size": 63488 00:25:06.986 }, 00:25:06.986 { 00:25:06.986 "name": "BaseBdev4", 00:25:06.986 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:06.986 "is_configured": true, 00:25:06.986 "data_offset": 2048, 00:25:06.986 "data_size": 63488 00:25:06.986 } 00:25:06.986 ] 00:25:06.986 }' 00:25:06.986 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:06.986 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:06.986 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:06.986 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:06.986 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:25:06.986 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:25:06.986 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:25:06.986 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:25:06.986 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:25:06.986 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:25:06.986 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev2 00:25:07.244 [2024-07-25 13:25:17.570839] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:07.244 [2024-07-25 13:25:17.717755] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x13bade0 00:25:07.502 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:25:07.502 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:25:07.502 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:07.502 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:07.502 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:07.502 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:07.502 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:07.502 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.502 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.502 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:07.502 "name": "raid_bdev1", 00:25:07.502 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:07.502 "strip_size_kb": 0, 00:25:07.502 "state": "online", 00:25:07.502 "raid_level": "raid1", 00:25:07.502 "superblock": true, 00:25:07.502 "num_base_bdevs": 4, 00:25:07.502 "num_base_bdevs_discovered": 3, 00:25:07.502 "num_base_bdevs_operational": 3, 00:25:07.502 "process": { 00:25:07.502 "type": "rebuild", 00:25:07.502 "target": "spare", 00:25:07.502 "progress": { 00:25:07.502 "blocks": 36864, 00:25:07.502 
"percent": 58 00:25:07.502 } 00:25:07.502 }, 00:25:07.502 "base_bdevs_list": [ 00:25:07.502 { 00:25:07.502 "name": "spare", 00:25:07.502 "uuid": "acf060b5-f704-59c9-8a35-85a219d3046c", 00:25:07.502 "is_configured": true, 00:25:07.502 "data_offset": 2048, 00:25:07.502 "data_size": 63488 00:25:07.502 }, 00:25:07.502 { 00:25:07.502 "name": null, 00:25:07.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.502 "is_configured": false, 00:25:07.502 "data_offset": 2048, 00:25:07.502 "data_size": 63488 00:25:07.502 }, 00:25:07.502 { 00:25:07.502 "name": "BaseBdev3", 00:25:07.502 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:07.502 "is_configured": true, 00:25:07.502 "data_offset": 2048, 00:25:07.502 "data_size": 63488 00:25:07.502 }, 00:25:07.502 { 00:25:07.502 "name": "BaseBdev4", 00:25:07.502 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:07.502 "is_configured": true, 00:25:07.502 "data_offset": 2048, 00:25:07.502 "data_size": 63488 00:25:07.502 } 00:25:07.502 ] 00:25:07.502 }' 00:25:07.502 13:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:07.760 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:07.760 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:07.760 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:07.760 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=873 00:25:07.760 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:07.760 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:07.760 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:07.760 13:25:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:07.760 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:07.760 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:07.760 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.760 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.018 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:08.018 "name": "raid_bdev1", 00:25:08.018 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:08.018 "strip_size_kb": 0, 00:25:08.018 "state": "online", 00:25:08.018 "raid_level": "raid1", 00:25:08.018 "superblock": true, 00:25:08.018 "num_base_bdevs": 4, 00:25:08.018 "num_base_bdevs_discovered": 3, 00:25:08.018 "num_base_bdevs_operational": 3, 00:25:08.018 "process": { 00:25:08.018 "type": "rebuild", 00:25:08.018 "target": "spare", 00:25:08.018 "progress": { 00:25:08.018 "blocks": 43008, 00:25:08.018 "percent": 67 00:25:08.018 } 00:25:08.018 }, 00:25:08.018 "base_bdevs_list": [ 00:25:08.018 { 00:25:08.018 "name": "spare", 00:25:08.018 "uuid": "acf060b5-f704-59c9-8a35-85a219d3046c", 00:25:08.018 "is_configured": true, 00:25:08.018 "data_offset": 2048, 00:25:08.018 "data_size": 63488 00:25:08.018 }, 00:25:08.018 { 00:25:08.018 "name": null, 00:25:08.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.018 "is_configured": false, 00:25:08.018 "data_offset": 2048, 00:25:08.018 "data_size": 63488 00:25:08.018 }, 00:25:08.018 { 00:25:08.018 "name": "BaseBdev3", 00:25:08.018 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:08.018 "is_configured": true, 00:25:08.018 "data_offset": 2048, 00:25:08.018 "data_size": 63488 00:25:08.018 }, 00:25:08.018 { 00:25:08.018 "name": 
"BaseBdev4", 00:25:08.018 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:08.018 "is_configured": true, 00:25:08.018 "data_offset": 2048, 00:25:08.018 "data_size": 63488 00:25:08.018 } 00:25:08.018 ] 00:25:08.018 }' 00:25:08.018 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:08.018 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:08.018 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:08.018 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:08.018 13:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:08.951 [2024-07-25 13:25:19.228793] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:08.951 [2024-07-25 13:25:19.228844] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:08.951 [2024-07-25 13:25:19.228932] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:08.951 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:08.951 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:08.951 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:08.951 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:08.951 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:08.951 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:08.951 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:25:08.951 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.209 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:09.209 "name": "raid_bdev1", 00:25:09.209 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:09.209 "strip_size_kb": 0, 00:25:09.209 "state": "online", 00:25:09.209 "raid_level": "raid1", 00:25:09.209 "superblock": true, 00:25:09.209 "num_base_bdevs": 4, 00:25:09.209 "num_base_bdevs_discovered": 3, 00:25:09.209 "num_base_bdevs_operational": 3, 00:25:09.209 "base_bdevs_list": [ 00:25:09.209 { 00:25:09.209 "name": "spare", 00:25:09.209 "uuid": "acf060b5-f704-59c9-8a35-85a219d3046c", 00:25:09.209 "is_configured": true, 00:25:09.209 "data_offset": 2048, 00:25:09.209 "data_size": 63488 00:25:09.209 }, 00:25:09.209 { 00:25:09.209 "name": null, 00:25:09.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.209 "is_configured": false, 00:25:09.209 "data_offset": 2048, 00:25:09.209 "data_size": 63488 00:25:09.209 }, 00:25:09.209 { 00:25:09.209 "name": "BaseBdev3", 00:25:09.209 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:09.209 "is_configured": true, 00:25:09.209 "data_offset": 2048, 00:25:09.209 "data_size": 63488 00:25:09.209 }, 00:25:09.209 { 00:25:09.209 "name": "BaseBdev4", 00:25:09.209 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:09.209 "is_configured": true, 00:25:09.209 "data_offset": 2048, 00:25:09.209 "data_size": 63488 00:25:09.209 } 00:25:09.209 ] 00:25:09.209 }' 00:25:09.209 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:09.209 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:09.209 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:09.467 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # 
[[ none == \s\p\a\r\e ]] 00:25:09.467 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:25:09.467 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:09.467 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:09.467 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:09.467 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:09.467 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:09.467 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.467 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.467 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:09.467 "name": "raid_bdev1", 00:25:09.467 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:09.467 "strip_size_kb": 0, 00:25:09.467 "state": "online", 00:25:09.467 "raid_level": "raid1", 00:25:09.467 "superblock": true, 00:25:09.467 "num_base_bdevs": 4, 00:25:09.467 "num_base_bdevs_discovered": 3, 00:25:09.467 "num_base_bdevs_operational": 3, 00:25:09.467 "base_bdevs_list": [ 00:25:09.467 { 00:25:09.467 "name": "spare", 00:25:09.467 "uuid": "acf060b5-f704-59c9-8a35-85a219d3046c", 00:25:09.467 "is_configured": true, 00:25:09.467 "data_offset": 2048, 00:25:09.467 "data_size": 63488 00:25:09.467 }, 00:25:09.467 { 00:25:09.468 "name": null, 00:25:09.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.468 "is_configured": false, 00:25:09.468 "data_offset": 2048, 00:25:09.468 "data_size": 63488 00:25:09.468 }, 00:25:09.468 { 00:25:09.468 "name": "BaseBdev3", 00:25:09.468 "uuid": 
"924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:09.468 "is_configured": true, 00:25:09.468 "data_offset": 2048, 00:25:09.468 "data_size": 63488 00:25:09.468 }, 00:25:09.468 { 00:25:09.468 "name": "BaseBdev4", 00:25:09.468 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:09.468 "is_configured": true, 00:25:09.468 "data_offset": 2048, 00:25:09.468 "data_size": 63488 00:25:09.468 } 00:25:09.468 ] 00:25:09.468 }' 00:25:09.468 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:09.725 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:09.725 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:09.725 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:09.725 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:09.725 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:09.725 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:09.725 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:09.725 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:09.725 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:09.725 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:09.725 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:09.726 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:09.726 13:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:09.726 13:25:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.726 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.983 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:09.983 "name": "raid_bdev1", 00:25:09.983 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:09.983 "strip_size_kb": 0, 00:25:09.983 "state": "online", 00:25:09.983 "raid_level": "raid1", 00:25:09.983 "superblock": true, 00:25:09.983 "num_base_bdevs": 4, 00:25:09.983 "num_base_bdevs_discovered": 3, 00:25:09.983 "num_base_bdevs_operational": 3, 00:25:09.983 "base_bdevs_list": [ 00:25:09.983 { 00:25:09.983 "name": "spare", 00:25:09.983 "uuid": "acf060b5-f704-59c9-8a35-85a219d3046c", 00:25:09.983 "is_configured": true, 00:25:09.983 "data_offset": 2048, 00:25:09.983 "data_size": 63488 00:25:09.983 }, 00:25:09.983 { 00:25:09.983 "name": null, 00:25:09.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.983 "is_configured": false, 00:25:09.983 "data_offset": 2048, 00:25:09.983 "data_size": 63488 00:25:09.983 }, 00:25:09.983 { 00:25:09.983 "name": "BaseBdev3", 00:25:09.983 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:09.983 "is_configured": true, 00:25:09.983 "data_offset": 2048, 00:25:09.983 "data_size": 63488 00:25:09.983 }, 00:25:09.983 { 00:25:09.984 "name": "BaseBdev4", 00:25:09.984 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:09.984 "is_configured": true, 00:25:09.984 "data_offset": 2048, 00:25:09.984 "data_size": 63488 00:25:09.984 } 00:25:09.984 ] 00:25:09.984 }' 00:25:09.984 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:09.984 13:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.549 13:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:10.549 [2024-07-25 13:25:20.981156] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:10.549 [2024-07-25 13:25:20.981180] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:10.549 [2024-07-25 13:25:20.981230] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:10.549 [2024-07-25 13:25:20.981292] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:10.549 [2024-07-25 13:25:20.981303] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13bae80 name raid_bdev1, state offline 00:25:10.549 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.549 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:25:10.807 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:25:10.807 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:25:10.807 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:25:10.807 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:10.807 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:10.807 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:10.807 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:10.807 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:25:10.807 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:10.807 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:25:10.807 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:10.807 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:10.807 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:11.065 /dev/nbd0 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:11.065 1+0 records in 00:25:11.065 1+0 records out 00:25:11.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236564 s, 17.3 MB/s 
00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:11.065 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:11.323 /dev/nbd1 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 
)) 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:11.323 1+0 records in 00:25:11.323 1+0 records out 00:25:11.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003204 s, 12.8 MB/s 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:11.323 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:11.581 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:11.581 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:11.581 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:11.581 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:11.581 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:11.581 13:25:21 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:11.581 13:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:11.839 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:11.839 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:11.839 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:11.839 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:11.839 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:11.839 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:11.839 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:11.839 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:11.839 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:11.839 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:12.097 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:12.097 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:12.097 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:12.097 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:12.097 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:12.097 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:25:12.097 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:12.097 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:12.097 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:25:12.097 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:12.097 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:12.356 [2024-07-25 13:25:22.790056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:12.356 [2024-07-25 13:25:22.790099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:12.356 [2024-07-25 13:25:22.790119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13c0270 00:25:12.356 [2024-07-25 13:25:22.790131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:12.356 [2024-07-25 13:25:22.791649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:12.356 [2024-07-25 13:25:22.791676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:12.356 [2024-07-25 13:25:22.791744] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:12.356 [2024-07-25 13:25:22.791767] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:12.356 [2024-07-25 13:25:22.791860] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:12.356 [2024-07-25 13:25:22.791925] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:12.356 spare 00:25:12.356 13:25:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:12.356 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:12.356 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:12.356 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:12.356 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:12.356 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:12.356 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:12.356 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:12.356 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:12.356 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:12.356 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.356 13:25:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.615 [2024-07-25 13:25:22.892236] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1458070 00:25:12.615 [2024-07-25 13:25:22.892253] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:12.615 [2024-07-25 13:25:22.892440] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15680d0 00:25:12.615 [2024-07-25 13:25:22.892584] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1458070 00:25:12.615 [2024-07-25 13:25:22.892593] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x1458070 00:25:12.615 [2024-07-25 13:25:22.892695] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.615 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:12.615 "name": "raid_bdev1", 00:25:12.615 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:12.615 "strip_size_kb": 0, 00:25:12.615 "state": "online", 00:25:12.615 "raid_level": "raid1", 00:25:12.615 "superblock": true, 00:25:12.615 "num_base_bdevs": 4, 00:25:12.615 "num_base_bdevs_discovered": 3, 00:25:12.615 "num_base_bdevs_operational": 3, 00:25:12.615 "base_bdevs_list": [ 00:25:12.615 { 00:25:12.615 "name": "spare", 00:25:12.615 "uuid": "acf060b5-f704-59c9-8a35-85a219d3046c", 00:25:12.615 "is_configured": true, 00:25:12.615 "data_offset": 2048, 00:25:12.615 "data_size": 63488 00:25:12.615 }, 00:25:12.615 { 00:25:12.615 "name": null, 00:25:12.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.615 "is_configured": false, 00:25:12.615 "data_offset": 2048, 00:25:12.615 "data_size": 63488 00:25:12.615 }, 00:25:12.615 { 00:25:12.615 "name": "BaseBdev3", 00:25:12.615 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:12.615 "is_configured": true, 00:25:12.615 "data_offset": 2048, 00:25:12.615 "data_size": 63488 00:25:12.615 }, 00:25:12.615 { 00:25:12.615 "name": "BaseBdev4", 00:25:12.615 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:12.615 "is_configured": true, 00:25:12.615 "data_offset": 2048, 00:25:12.615 "data_size": 63488 00:25:12.615 } 00:25:12.615 ] 00:25:12.615 }' 00:25:12.615 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:12.615 13:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.183 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:13.183 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:25:13.183 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:13.183 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:13.183 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:13.183 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.183 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.441 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:13.441 "name": "raid_bdev1", 00:25:13.441 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:13.441 "strip_size_kb": 0, 00:25:13.441 "state": "online", 00:25:13.441 "raid_level": "raid1", 00:25:13.441 "superblock": true, 00:25:13.441 "num_base_bdevs": 4, 00:25:13.441 "num_base_bdevs_discovered": 3, 00:25:13.441 "num_base_bdevs_operational": 3, 00:25:13.441 "base_bdevs_list": [ 00:25:13.441 { 00:25:13.441 "name": "spare", 00:25:13.441 "uuid": "acf060b5-f704-59c9-8a35-85a219d3046c", 00:25:13.441 "is_configured": true, 00:25:13.441 "data_offset": 2048, 00:25:13.441 "data_size": 63488 00:25:13.441 }, 00:25:13.441 { 00:25:13.441 "name": null, 00:25:13.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.441 "is_configured": false, 00:25:13.441 "data_offset": 2048, 00:25:13.441 "data_size": 63488 00:25:13.441 }, 00:25:13.441 { 00:25:13.441 "name": "BaseBdev3", 00:25:13.441 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:13.441 "is_configured": true, 00:25:13.441 "data_offset": 2048, 00:25:13.441 "data_size": 63488 00:25:13.441 }, 00:25:13.441 { 00:25:13.441 "name": "BaseBdev4", 00:25:13.441 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:13.441 "is_configured": true, 00:25:13.441 "data_offset": 
2048, 00:25:13.441 "data_size": 63488 00:25:13.441 } 00:25:13.441 ] 00:25:13.441 }' 00:25:13.442 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:13.442 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:13.442 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:13.699 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:13.700 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.700 13:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:13.700 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:25:13.700 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:13.957 [2024-07-25 13:25:24.374368] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:13.957 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:13.957 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:13.957 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:13.957 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:13.957 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:13.957 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:13.957 13:25:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:13.957 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:13.957 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:13.957 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:13.957 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.958 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.215 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:14.215 "name": "raid_bdev1", 00:25:14.215 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:14.215 "strip_size_kb": 0, 00:25:14.215 "state": "online", 00:25:14.215 "raid_level": "raid1", 00:25:14.215 "superblock": true, 00:25:14.215 "num_base_bdevs": 4, 00:25:14.215 "num_base_bdevs_discovered": 2, 00:25:14.215 "num_base_bdevs_operational": 2, 00:25:14.215 "base_bdevs_list": [ 00:25:14.215 { 00:25:14.215 "name": null, 00:25:14.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.215 "is_configured": false, 00:25:14.215 "data_offset": 2048, 00:25:14.215 "data_size": 63488 00:25:14.215 }, 00:25:14.215 { 00:25:14.215 "name": null, 00:25:14.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.215 "is_configured": false, 00:25:14.215 "data_offset": 2048, 00:25:14.215 "data_size": 63488 00:25:14.215 }, 00:25:14.215 { 00:25:14.215 "name": "BaseBdev3", 00:25:14.215 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:14.215 "is_configured": true, 00:25:14.215 "data_offset": 2048, 00:25:14.215 "data_size": 63488 00:25:14.215 }, 00:25:14.215 { 00:25:14.215 "name": "BaseBdev4", 00:25:14.215 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 
00:25:14.215 "is_configured": true, 00:25:14.215 "data_offset": 2048, 00:25:14.215 "data_size": 63488 00:25:14.215 } 00:25:14.215 ] 00:25:14.215 }' 00:25:14.215 13:25:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:14.215 13:25:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.781 13:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:15.038 [2024-07-25 13:25:25.373020] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:15.038 [2024-07-25 13:25:25.373158] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:25:15.038 [2024-07-25 13:25:25.373174] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:15.038 [2024-07-25 13:25:25.373202] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:15.038 [2024-07-25 13:25:25.376947] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13c0600 00:25:15.038 [2024-07-25 13:25:25.378961] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:15.038 13:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:25:15.971 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:15.971 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:15.971 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:15.971 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:15.971 13:25:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:15.971 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.971 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.228 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:16.228 "name": "raid_bdev1", 00:25:16.228 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:16.228 "strip_size_kb": 0, 00:25:16.228 "state": "online", 00:25:16.228 "raid_level": "raid1", 00:25:16.228 "superblock": true, 00:25:16.228 "num_base_bdevs": 4, 00:25:16.228 "num_base_bdevs_discovered": 3, 00:25:16.228 "num_base_bdevs_operational": 3, 00:25:16.228 "process": { 00:25:16.228 "type": "rebuild", 00:25:16.228 "target": "spare", 00:25:16.228 "progress": { 00:25:16.228 "blocks": 24576, 00:25:16.228 "percent": 38 00:25:16.228 } 00:25:16.228 }, 00:25:16.228 "base_bdevs_list": [ 00:25:16.228 { 00:25:16.228 "name": "spare", 00:25:16.228 "uuid": "acf060b5-f704-59c9-8a35-85a219d3046c", 00:25:16.228 "is_configured": true, 00:25:16.228 "data_offset": 2048, 00:25:16.228 "data_size": 63488 00:25:16.228 }, 00:25:16.228 { 00:25:16.228 "name": null, 00:25:16.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.228 "is_configured": false, 00:25:16.228 "data_offset": 2048, 00:25:16.228 "data_size": 63488 00:25:16.228 }, 00:25:16.228 { 00:25:16.228 "name": "BaseBdev3", 00:25:16.228 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:16.228 "is_configured": true, 00:25:16.228 "data_offset": 2048, 00:25:16.228 "data_size": 63488 00:25:16.228 }, 00:25:16.228 { 00:25:16.228 "name": "BaseBdev4", 00:25:16.228 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:16.228 "is_configured": true, 00:25:16.228 "data_offset": 2048, 00:25:16.228 "data_size": 63488 00:25:16.228 } 00:25:16.228 ] 00:25:16.228 }' 
00:25:16.228 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:16.228 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:16.228 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:16.486 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:16.486 13:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:16.486 [2024-07-25 13:25:26.931933] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:16.744 [2024-07-25 13:25:26.990563] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:16.744 [2024-07-25 13:25:26.990603] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:16.744 [2024-07-25 13:25:26.990618] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:16.744 [2024-07-25 13:25:26.990625] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:16.744 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:16.744 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:16.744 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:16.744 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:16.744 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:16.744 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:16.744 13:25:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:16.744 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:16.744 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:16.744 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:16.744 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.744 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.002 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:17.002 "name": "raid_bdev1", 00:25:17.002 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:17.002 "strip_size_kb": 0, 00:25:17.002 "state": "online", 00:25:17.002 "raid_level": "raid1", 00:25:17.002 "superblock": true, 00:25:17.002 "num_base_bdevs": 4, 00:25:17.002 "num_base_bdevs_discovered": 2, 00:25:17.002 "num_base_bdevs_operational": 2, 00:25:17.002 "base_bdevs_list": [ 00:25:17.002 { 00:25:17.002 "name": null, 00:25:17.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.002 "is_configured": false, 00:25:17.002 "data_offset": 2048, 00:25:17.002 "data_size": 63488 00:25:17.002 }, 00:25:17.002 { 00:25:17.002 "name": null, 00:25:17.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.002 "is_configured": false, 00:25:17.002 "data_offset": 2048, 00:25:17.002 "data_size": 63488 00:25:17.002 }, 00:25:17.002 { 00:25:17.002 "name": "BaseBdev3", 00:25:17.002 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:17.002 "is_configured": true, 00:25:17.002 "data_offset": 2048, 00:25:17.002 "data_size": 63488 00:25:17.002 }, 00:25:17.002 { 00:25:17.002 "name": "BaseBdev4", 00:25:17.002 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 
00:25:17.002 "is_configured": true, 00:25:17.002 "data_offset": 2048, 00:25:17.002 "data_size": 63488 00:25:17.002 } 00:25:17.002 ] 00:25:17.002 }' 00:25:17.002 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:17.002 13:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.566 13:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:17.566 [2024-07-25 13:25:28.009000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:17.566 [2024-07-25 13:25:28.009047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.566 [2024-07-25 13:25:28.009066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13bfa20 00:25:17.566 [2024-07-25 13:25:28.009077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.566 [2024-07-25 13:25:28.009422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.566 [2024-07-25 13:25:28.009439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:17.566 [2024-07-25 13:25:28.009508] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:17.566 [2024-07-25 13:25:28.009519] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:25:17.566 [2024-07-25 13:25:28.009529] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:17.566 [2024-07-25 13:25:28.009545] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:17.566 [2024-07-25 13:25:28.013320] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13c2e30 00:25:17.566 spare 00:25:17.566 [2024-07-25 13:25:28.014707] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:17.566 13:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:25:18.949 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:18.949 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:18.949 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:18.949 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:18.949 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:18.949 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.949 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.949 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:18.949 "name": "raid_bdev1", 00:25:18.949 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:18.949 "strip_size_kb": 0, 00:25:18.949 "state": "online", 00:25:18.949 "raid_level": "raid1", 00:25:18.949 "superblock": true, 00:25:18.949 "num_base_bdevs": 4, 00:25:18.949 "num_base_bdevs_discovered": 3, 00:25:18.949 "num_base_bdevs_operational": 3, 00:25:18.949 "process": { 00:25:18.949 "type": "rebuild", 00:25:18.949 "target": "spare", 00:25:18.949 "progress": { 00:25:18.949 "blocks": 24576, 00:25:18.949 
"percent": 38 00:25:18.949 } 00:25:18.949 }, 00:25:18.949 "base_bdevs_list": [ 00:25:18.949 { 00:25:18.949 "name": "spare", 00:25:18.949 "uuid": "acf060b5-f704-59c9-8a35-85a219d3046c", 00:25:18.949 "is_configured": true, 00:25:18.949 "data_offset": 2048, 00:25:18.949 "data_size": 63488 00:25:18.949 }, 00:25:18.949 { 00:25:18.949 "name": null, 00:25:18.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.949 "is_configured": false, 00:25:18.949 "data_offset": 2048, 00:25:18.949 "data_size": 63488 00:25:18.949 }, 00:25:18.949 { 00:25:18.949 "name": "BaseBdev3", 00:25:18.949 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:18.949 "is_configured": true, 00:25:18.949 "data_offset": 2048, 00:25:18.949 "data_size": 63488 00:25:18.949 }, 00:25:18.949 { 00:25:18.949 "name": "BaseBdev4", 00:25:18.949 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:18.949 "is_configured": true, 00:25:18.949 "data_offset": 2048, 00:25:18.949 "data_size": 63488 00:25:18.949 } 00:25:18.949 ] 00:25:18.949 }' 00:25:18.949 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:18.950 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:18.950 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:18.950 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:18.950 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:19.207 [2024-07-25 13:25:29.594366] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:19.207 [2024-07-25 13:25:29.626401] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:19.207 [2024-07-25 13:25:29.626443] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:19.207 [2024-07-25 13:25:29.626458] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:19.207 [2024-07-25 13:25:29.626465] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:19.207 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:19.207 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:19.207 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:19.207 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:19.207 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:19.207 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:19.207 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:19.207 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:19.207 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:19.207 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:19.207 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.207 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.465 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:19.465 "name": "raid_bdev1", 00:25:19.465 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:19.465 "strip_size_kb": 0, 00:25:19.465 "state": 
"online", 00:25:19.465 "raid_level": "raid1", 00:25:19.465 "superblock": true, 00:25:19.465 "num_base_bdevs": 4, 00:25:19.465 "num_base_bdevs_discovered": 2, 00:25:19.465 "num_base_bdevs_operational": 2, 00:25:19.465 "base_bdevs_list": [ 00:25:19.465 { 00:25:19.465 "name": null, 00:25:19.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.465 "is_configured": false, 00:25:19.465 "data_offset": 2048, 00:25:19.465 "data_size": 63488 00:25:19.465 }, 00:25:19.465 { 00:25:19.465 "name": null, 00:25:19.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.465 "is_configured": false, 00:25:19.465 "data_offset": 2048, 00:25:19.465 "data_size": 63488 00:25:19.465 }, 00:25:19.465 { 00:25:19.465 "name": "BaseBdev3", 00:25:19.465 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:19.465 "is_configured": true, 00:25:19.465 "data_offset": 2048, 00:25:19.465 "data_size": 63488 00:25:19.465 }, 00:25:19.465 { 00:25:19.465 "name": "BaseBdev4", 00:25:19.465 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:19.465 "is_configured": true, 00:25:19.465 "data_offset": 2048, 00:25:19.465 "data_size": 63488 00:25:19.465 } 00:25:19.465 ] 00:25:19.465 }' 00:25:19.465 13:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:19.465 13:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.030 13:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:20.030 13:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:20.030 13:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:20.030 13:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:20.030 13:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:20.030 13:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.030 13:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.288 13:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:20.288 "name": "raid_bdev1", 00:25:20.288 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:20.288 "strip_size_kb": 0, 00:25:20.288 "state": "online", 00:25:20.288 "raid_level": "raid1", 00:25:20.288 "superblock": true, 00:25:20.288 "num_base_bdevs": 4, 00:25:20.288 "num_base_bdevs_discovered": 2, 00:25:20.288 "num_base_bdevs_operational": 2, 00:25:20.288 "base_bdevs_list": [ 00:25:20.288 { 00:25:20.288 "name": null, 00:25:20.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.288 "is_configured": false, 00:25:20.288 "data_offset": 2048, 00:25:20.288 "data_size": 63488 00:25:20.288 }, 00:25:20.288 { 00:25:20.288 "name": null, 00:25:20.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.288 "is_configured": false, 00:25:20.288 "data_offset": 2048, 00:25:20.288 "data_size": 63488 00:25:20.288 }, 00:25:20.288 { 00:25:20.288 "name": "BaseBdev3", 00:25:20.288 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:20.288 "is_configured": true, 00:25:20.288 "data_offset": 2048, 00:25:20.288 "data_size": 63488 00:25:20.288 }, 00:25:20.288 { 00:25:20.288 "name": "BaseBdev4", 00:25:20.288 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:20.288 "is_configured": true, 00:25:20.288 "data_offset": 2048, 00:25:20.288 "data_size": 63488 00:25:20.288 } 00:25:20.288 ] 00:25:20.288 }' 00:25:20.288 13:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:20.288 13:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:20.288 13:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:25:20.288 13:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:20.288 13:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:20.546 13:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:20.823 [2024-07-25 13:25:31.190604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:20.823 [2024-07-25 13:25:31.190650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:20.823 [2024-07-25 13:25:31.190672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13c1710 00:25:20.823 [2024-07-25 13:25:31.190683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:20.823 [2024-07-25 13:25:31.190993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:20.823 [2024-07-25 13:25:31.191008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:20.823 [2024-07-25 13:25:31.191064] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:20.823 [2024-07-25 13:25:31.191074] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:20.823 [2024-07-25 13:25:31.191084] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:20.823 BaseBdev1 00:25:20.823 13:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:25:21.765 13:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:21.765 
13:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:21.765 13:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:21.765 13:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:21.765 13:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:21.765 13:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:21.765 13:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:21.765 13:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:21.765 13:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:21.765 13:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:21.765 13:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.765 13:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.023 13:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:22.023 "name": "raid_bdev1", 00:25:22.023 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:22.023 "strip_size_kb": 0, 00:25:22.023 "state": "online", 00:25:22.023 "raid_level": "raid1", 00:25:22.023 "superblock": true, 00:25:22.023 "num_base_bdevs": 4, 00:25:22.023 "num_base_bdevs_discovered": 2, 00:25:22.023 "num_base_bdevs_operational": 2, 00:25:22.023 "base_bdevs_list": [ 00:25:22.023 { 00:25:22.023 "name": null, 00:25:22.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.023 "is_configured": false, 00:25:22.023 "data_offset": 2048, 00:25:22.023 "data_size": 63488 00:25:22.023 }, 
00:25:22.023 { 00:25:22.023 "name": null, 00:25:22.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.023 "is_configured": false, 00:25:22.023 "data_offset": 2048, 00:25:22.023 "data_size": 63488 00:25:22.023 }, 00:25:22.023 { 00:25:22.023 "name": "BaseBdev3", 00:25:22.023 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:22.023 "is_configured": true, 00:25:22.023 "data_offset": 2048, 00:25:22.023 "data_size": 63488 00:25:22.023 }, 00:25:22.023 { 00:25:22.023 "name": "BaseBdev4", 00:25:22.023 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:22.023 "is_configured": true, 00:25:22.023 "data_offset": 2048, 00:25:22.023 "data_size": 63488 00:25:22.023 } 00:25:22.023 ] 00:25:22.023 }' 00:25:22.023 13:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:22.023 13:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.588 13:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:22.588 13:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:22.588 13:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:22.588 13:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:22.588 13:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:22.588 13:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.588 13:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.846 13:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:22.846 "name": "raid_bdev1", 00:25:22.846 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:22.846 
"strip_size_kb": 0, 00:25:22.846 "state": "online", 00:25:22.846 "raid_level": "raid1", 00:25:22.846 "superblock": true, 00:25:22.846 "num_base_bdevs": 4, 00:25:22.846 "num_base_bdevs_discovered": 2, 00:25:22.846 "num_base_bdevs_operational": 2, 00:25:22.846 "base_bdevs_list": [ 00:25:22.846 { 00:25:22.846 "name": null, 00:25:22.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.846 "is_configured": false, 00:25:22.846 "data_offset": 2048, 00:25:22.846 "data_size": 63488 00:25:22.846 }, 00:25:22.846 { 00:25:22.846 "name": null, 00:25:22.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.846 "is_configured": false, 00:25:22.846 "data_offset": 2048, 00:25:22.846 "data_size": 63488 00:25:22.846 }, 00:25:22.846 { 00:25:22.846 "name": "BaseBdev3", 00:25:22.846 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:22.846 "is_configured": true, 00:25:22.846 "data_offset": 2048, 00:25:22.846 "data_size": 63488 00:25:22.846 }, 00:25:22.846 { 00:25:22.846 "name": "BaseBdev4", 00:25:22.846 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:22.846 "is_configured": true, 00:25:22.846 "data_offset": 2048, 00:25:22.846 "data_size": 63488 00:25:22.846 } 00:25:22.846 ] 00:25:22.846 }' 00:25:22.846 13:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:22.846 13:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:22.846 13:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:22.846 13:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:22.846 13:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:22.846 13:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:25:22.846 13:25:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:22.846 13:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:23.104 13:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.104 13:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:23.104 13:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.104 13:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:23.104 13:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.104 13:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:23.104 13:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:23.104 13:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:23.104 [2024-07-25 13:25:33.548832] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:23.104 [2024-07-25 13:25:33.548940] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:23.104 [2024-07-25 13:25:33.548955] 
bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:23.104 request: 00:25:23.104 { 00:25:23.104 "base_bdev": "BaseBdev1", 00:25:23.104 "raid_bdev": "raid_bdev1", 00:25:23.104 "method": "bdev_raid_add_base_bdev", 00:25:23.104 "req_id": 1 00:25:23.104 } 00:25:23.104 Got JSON-RPC error response 00:25:23.104 response: 00:25:23.104 { 00:25:23.104 "code": -22, 00:25:23.104 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:23.104 } 00:25:23.104 13:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:25:23.104 13:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:23.104 13:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:23.104 13:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:23.104 13:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:25:24.477 13:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:24.477 13:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:24.477 13:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:24.477 13:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:24.478 13:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:24.478 13:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:24.478 13:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:24.478 13:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:24.478 13:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:25:24.478 13:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:24.478 13:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.478 13:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.478 13:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:24.478 "name": "raid_bdev1", 00:25:24.478 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:24.478 "strip_size_kb": 0, 00:25:24.478 "state": "online", 00:25:24.478 "raid_level": "raid1", 00:25:24.478 "superblock": true, 00:25:24.478 "num_base_bdevs": 4, 00:25:24.478 "num_base_bdevs_discovered": 2, 00:25:24.478 "num_base_bdevs_operational": 2, 00:25:24.478 "base_bdevs_list": [ 00:25:24.478 { 00:25:24.478 "name": null, 00:25:24.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.478 "is_configured": false, 00:25:24.478 "data_offset": 2048, 00:25:24.478 "data_size": 63488 00:25:24.478 }, 00:25:24.478 { 00:25:24.478 "name": null, 00:25:24.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.478 "is_configured": false, 00:25:24.478 "data_offset": 2048, 00:25:24.478 "data_size": 63488 00:25:24.478 }, 00:25:24.478 { 00:25:24.478 "name": "BaseBdev3", 00:25:24.478 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 00:25:24.478 "is_configured": true, 00:25:24.478 "data_offset": 2048, 00:25:24.478 "data_size": 63488 00:25:24.478 }, 00:25:24.478 { 00:25:24.478 "name": "BaseBdev4", 00:25:24.478 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:24.478 "is_configured": true, 00:25:24.478 "data_offset": 2048, 00:25:24.478 "data_size": 63488 00:25:24.478 } 00:25:24.478 ] 00:25:24.478 }' 00:25:24.478 13:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:24.478 13:25:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.042 13:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:25.042 13:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:25.042 13:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:25.042 13:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:25.042 13:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:25.042 13:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.042 13:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.301 13:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:25.301 "name": "raid_bdev1", 00:25:25.301 "uuid": "3ea1f627-758f-48c7-bd9a-108e3c95b7ee", 00:25:25.301 "strip_size_kb": 0, 00:25:25.301 "state": "online", 00:25:25.301 "raid_level": "raid1", 00:25:25.301 "superblock": true, 00:25:25.301 "num_base_bdevs": 4, 00:25:25.301 "num_base_bdevs_discovered": 2, 00:25:25.301 "num_base_bdevs_operational": 2, 00:25:25.301 "base_bdevs_list": [ 00:25:25.301 { 00:25:25.301 "name": null, 00:25:25.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.301 "is_configured": false, 00:25:25.301 "data_offset": 2048, 00:25:25.301 "data_size": 63488 00:25:25.301 }, 00:25:25.301 { 00:25:25.301 "name": null, 00:25:25.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.301 "is_configured": false, 00:25:25.301 "data_offset": 2048, 00:25:25.301 "data_size": 63488 00:25:25.301 }, 00:25:25.301 { 00:25:25.301 "name": "BaseBdev3", 00:25:25.301 "uuid": "924f86e2-b54b-5b9f-b265-987ab4a4ca6d", 
00:25:25.301 "is_configured": true, 00:25:25.301 "data_offset": 2048, 00:25:25.301 "data_size": 63488 00:25:25.301 }, 00:25:25.301 { 00:25:25.301 "name": "BaseBdev4", 00:25:25.301 "uuid": "ece80e56-632c-54f6-b63c-cf84d884ce84", 00:25:25.301 "is_configured": true, 00:25:25.301 "data_offset": 2048, 00:25:25.301 "data_size": 63488 00:25:25.301 } 00:25:25.301 ] 00:25:25.301 }' 00:25:25.301 13:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:25.301 13:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:25.301 13:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:25.301 13:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:25.301 13:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 978646 00:25:25.301 13:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 978646 ']' 00:25:25.301 13:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 978646 00:25:25.301 13:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:25:25.301 13:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:25.301 13:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 978646 00:25:25.301 13:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:25.301 13:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:25.301 13:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 978646' 00:25:25.301 killing process with pid 978646 00:25:25.301 13:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 978646 00:25:25.301 Received 
shutdown signal, test time was about 60.000000 seconds 00:25:25.301 00:25:25.301 Latency(us) 00:25:25.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.301 =================================================================================================================== 00:25:25.301 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:25.301 [2024-07-25 13:25:35.736654] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:25.301 [2024-07-25 13:25:35.736736] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:25.301 [2024-07-25 13:25:35.736786] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:25.301 [2024-07-25 13:25:35.736797] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1458070 name raid_bdev1, state offline 00:25:25.301 13:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 978646 00:25:25.301 [2024-07-25 13:25:35.776990] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:25.560 13:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:25:25.560 00:25:25.560 real 0m35.613s 00:25:25.560 user 0m52.146s 00:25:25.560 sys 0m6.136s 00:25:25.560 13:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:25.560 13:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.560 ************************************ 00:25:25.560 END TEST raid_rebuild_test_sb 00:25:25.560 ************************************ 00:25:25.560 13:25:36 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:25:25.560 13:25:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:25:25.560 13:25:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:25.560 13:25:36 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:25:25.818 ************************************ 00:25:25.818 START TEST raid_rebuild_test_io 00:25:25.818 ************************************ 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:25.818 13:25:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=985130 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 985130 /var/tmp/spdk-raid.sock 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 985130 ']' 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:25.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:25.818 13:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:25.818 [2024-07-25 13:25:36.116068] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:25:25.818 [2024-07-25 13:25:36.116122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid985130 ] 00:25:25.818 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:25.818 Zero copy mechanism will not be used. 
00:25:25.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:25.818 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:25.819 [the same qat_pci_device_allocate()/EAL "cannot be used" message pair repeats for each remaining QAT virtual function from 0000:3d:01.1 through 0000:3f:02.3]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:25.819 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:25.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:25.819 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:25.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:25.819 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:25.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:25.819 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:25.819 [2024-07-25 13:25:36.246887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.077 [2024-07-25 13:25:36.333649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.077 [2024-07-25 13:25:36.390578] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:26.077 [2024-07-25 13:25:36.390607] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:26.642 13:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.642 13:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:25:26.642 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:26.642 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:26.901 BaseBdev1_malloc 00:25:26.901 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:27.158 [2024-07-25 13:25:37.450050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:27.158 [2024-07-25 13:25:37.450092] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:25:27.158 [2024-07-25 13:25:37.450112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf095f0 00:25:27.158 [2024-07-25 13:25:37.450124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.158 [2024-07-25 13:25:37.451629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.158 [2024-07-25 13:25:37.451660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:27.158 BaseBdev1 00:25:27.158 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:27.158 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:27.416 BaseBdev2_malloc 00:25:27.416 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:27.675 [2024-07-25 13:25:37.915672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:27.675 [2024-07-25 13:25:37.915712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.675 [2024-07-25 13:25:37.915730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10acfd0 00:25:27.675 [2024-07-25 13:25:37.915741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.675 [2024-07-25 13:25:37.917137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.675 [2024-07-25 13:25:37.917175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:27.675 BaseBdev2 00:25:27.675 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in 
"${base_bdevs[@]}" 00:25:27.675 13:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:27.675 BaseBdev3_malloc 00:25:27.934 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:27.934 [2024-07-25 13:25:38.373086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:27.934 [2024-07-25 13:25:38.373126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.934 [2024-07-25 13:25:38.373152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a2da0 00:25:27.934 [2024-07-25 13:25:38.373163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.934 [2024-07-25 13:25:38.374501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.934 [2024-07-25 13:25:38.374528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:27.934 BaseBdev3 00:25:27.934 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:27.934 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:28.192 BaseBdev4_malloc 00:25:28.192 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:28.450 [2024-07-25 13:25:38.834518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:28.450 [2024-07-25 
13:25:38.834558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:28.450 [2024-07-25 13:25:38.834576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf01290 00:25:28.450 [2024-07-25 13:25:38.834587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:28.450 [2024-07-25 13:25:38.835927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:28.450 [2024-07-25 13:25:38.835952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:28.450 BaseBdev4 00:25:28.450 13:25:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:28.707 spare_malloc 00:25:28.707 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:28.965 spare_delay 00:25:28.965 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:29.223 [2024-07-25 13:25:39.480482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:29.223 [2024-07-25 13:25:39.480518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:29.223 [2024-07-25 13:25:39.480537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf03eb0 00:25:29.223 [2024-07-25 13:25:39.480548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:29.223 [2024-07-25 13:25:39.481875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:29.223 [2024-07-25 13:25:39.481900] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:29.223 spare 00:25:29.223 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:29.223 [2024-07-25 13:25:39.693058] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:29.223 [2024-07-25 13:25:39.694182] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:29.223 [2024-07-25 13:25:39.694231] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:29.223 [2024-07-25 13:25:39.694272] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:29.223 [2024-07-25 13:25:39.694343] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xf00e80 00:25:29.223 [2024-07-25 13:25:39.694353] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:29.223 [2024-07-25 13:25:39.694547] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf00d90 00:25:29.223 [2024-07-25 13:25:39.694680] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf00e80 00:25:29.223 [2024-07-25 13:25:39.694690] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf00e80 00:25:29.223 [2024-07-25 13:25:39.694791] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.223 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:29.223 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:29.223 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:29.223 13:25:39 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:29.223 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:29.223 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:29.223 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:29.480 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:29.480 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:29.480 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:29.480 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.480 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.480 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:29.480 "name": "raid_bdev1", 00:25:29.480 "uuid": "4ff59462-4bce-4d02-8b7b-fbc911fbb4e3", 00:25:29.480 "strip_size_kb": 0, 00:25:29.480 "state": "online", 00:25:29.480 "raid_level": "raid1", 00:25:29.480 "superblock": false, 00:25:29.480 "num_base_bdevs": 4, 00:25:29.480 "num_base_bdevs_discovered": 4, 00:25:29.480 "num_base_bdevs_operational": 4, 00:25:29.480 "base_bdevs_list": [ 00:25:29.480 { 00:25:29.480 "name": "BaseBdev1", 00:25:29.480 "uuid": "f5cda80b-4811-5c7c-8131-859d507b0aa1", 00:25:29.480 "is_configured": true, 00:25:29.480 "data_offset": 0, 00:25:29.480 "data_size": 65536 00:25:29.480 }, 00:25:29.480 { 00:25:29.480 "name": "BaseBdev2", 00:25:29.480 "uuid": "d8587399-1353-5518-89be-3dab0b7d1b33", 00:25:29.480 "is_configured": true, 00:25:29.480 "data_offset": 0, 00:25:29.480 "data_size": 65536 00:25:29.480 }, 00:25:29.480 { 
00:25:29.480 "name": "BaseBdev3", 00:25:29.480 "uuid": "b8099b36-19ff-5fe0-91ba-be1eb88ed338", 00:25:29.480 "is_configured": true, 00:25:29.480 "data_offset": 0, 00:25:29.480 "data_size": 65536 00:25:29.480 }, 00:25:29.480 { 00:25:29.480 "name": "BaseBdev4", 00:25:29.480 "uuid": "7c499f33-3c92-5d01-a4d9-1e13bcf8da2c", 00:25:29.480 "is_configured": true, 00:25:29.480 "data_offset": 0, 00:25:29.480 "data_size": 65536 00:25:29.480 } 00:25:29.480 ] 00:25:29.480 }' 00:25:29.480 13:25:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:29.480 13:25:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:30.043 13:25:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:30.043 13:25:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:25:30.300 [2024-07-25 13:25:40.736059] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:30.300 13:25:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:25:30.300 13:25:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.300 13:25:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:30.557 13:25:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:25:30.557 13:25:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:25:30.557 13:25:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:30.557 13:25:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:30.815 [2024-07-25 13:25:41.106760] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf06410 00:25:30.815 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:30.815 Zero copy mechanism will not be used. 00:25:30.815 Running I/O for 60 seconds... 00:25:30.815 [2024-07-25 13:25:41.217984] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:30.815 [2024-07-25 13:25:41.225499] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xf06410 00:25:30.815 13:25:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:30.815 13:25:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:30.815 13:25:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:30.815 13:25:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:30.815 13:25:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:30.815 13:25:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:30.815 13:25:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:30.815 13:25:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:30.815 13:25:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:30.815 13:25:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:30.815 13:25:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.815 13:25:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.072 13:25:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:31.072 "name": "raid_bdev1", 00:25:31.072 "uuid": "4ff59462-4bce-4d02-8b7b-fbc911fbb4e3", 00:25:31.072 "strip_size_kb": 0, 00:25:31.072 "state": "online", 00:25:31.072 "raid_level": "raid1", 00:25:31.072 "superblock": false, 00:25:31.072 "num_base_bdevs": 4, 00:25:31.072 "num_base_bdevs_discovered": 3, 00:25:31.072 "num_base_bdevs_operational": 3, 00:25:31.072 "base_bdevs_list": [ 00:25:31.072 { 00:25:31.072 "name": null, 00:25:31.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.072 "is_configured": false, 00:25:31.072 "data_offset": 0, 00:25:31.072 "data_size": 65536 00:25:31.072 }, 00:25:31.072 { 00:25:31.072 "name": "BaseBdev2", 00:25:31.072 "uuid": "d8587399-1353-5518-89be-3dab0b7d1b33", 00:25:31.072 "is_configured": true, 00:25:31.072 "data_offset": 0, 00:25:31.072 "data_size": 65536 00:25:31.072 }, 00:25:31.072 { 00:25:31.072 "name": "BaseBdev3", 00:25:31.072 "uuid": "b8099b36-19ff-5fe0-91ba-be1eb88ed338", 00:25:31.072 "is_configured": true, 00:25:31.072 "data_offset": 0, 00:25:31.072 "data_size": 65536 00:25:31.072 }, 00:25:31.072 { 00:25:31.072 "name": "BaseBdev4", 00:25:31.072 "uuid": "7c499f33-3c92-5d01-a4d9-1e13bcf8da2c", 00:25:31.072 "is_configured": true, 00:25:31.072 "data_offset": 0, 00:25:31.072 "data_size": 65536 00:25:31.072 } 00:25:31.072 ] 00:25:31.072 }' 00:25:31.072 13:25:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:31.072 13:25:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:31.638 13:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:31.896 [2024-07-25 13:25:42.295206] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:31.896 13:25:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:31.896 [2024-07-25 13:25:42.378881] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10bb1f0 00:25:31.896 [2024-07-25 13:25:42.381275] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:32.154 [2024-07-25 13:25:42.499618] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:32.154 [2024-07-25 13:25:42.499885] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:32.411 [2024-07-25 13:25:42.720877] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:32.411 [2024-07-25 13:25:42.721146] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:32.669 [2024-07-25 13:25:43.071852] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:32.669 [2024-07-25 13:25:43.072110] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:32.927 [2024-07-25 13:25:43.217940] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:32.927 [2024-07-25 13:25:43.218504] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:32.927 13:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:32.927 13:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:32.927 13:25:43 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:32.927 13:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:32.927 13:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:32.927 13:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.927 13:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.184 13:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:33.184 "name": "raid_bdev1", 00:25:33.184 "uuid": "4ff59462-4bce-4d02-8b7b-fbc911fbb4e3", 00:25:33.184 "strip_size_kb": 0, 00:25:33.184 "state": "online", 00:25:33.184 "raid_level": "raid1", 00:25:33.184 "superblock": false, 00:25:33.184 "num_base_bdevs": 4, 00:25:33.184 "num_base_bdevs_discovered": 4, 00:25:33.184 "num_base_bdevs_operational": 4, 00:25:33.184 "process": { 00:25:33.184 "type": "rebuild", 00:25:33.184 "target": "spare", 00:25:33.184 "progress": { 00:25:33.184 "blocks": 12288, 00:25:33.184 "percent": 18 00:25:33.184 } 00:25:33.184 }, 00:25:33.184 "base_bdevs_list": [ 00:25:33.184 { 00:25:33.184 "name": "spare", 00:25:33.184 "uuid": "c4fa4bc9-a738-5036-9e4d-85e7b7c8fc8d", 00:25:33.184 "is_configured": true, 00:25:33.184 "data_offset": 0, 00:25:33.184 "data_size": 65536 00:25:33.184 }, 00:25:33.184 { 00:25:33.184 "name": "BaseBdev2", 00:25:33.184 "uuid": "d8587399-1353-5518-89be-3dab0b7d1b33", 00:25:33.184 "is_configured": true, 00:25:33.184 "data_offset": 0, 00:25:33.184 "data_size": 65536 00:25:33.184 }, 00:25:33.184 { 00:25:33.184 "name": "BaseBdev3", 00:25:33.184 "uuid": "b8099b36-19ff-5fe0-91ba-be1eb88ed338", 00:25:33.184 "is_configured": true, 00:25:33.184 "data_offset": 0, 00:25:33.184 "data_size": 65536 00:25:33.184 }, 00:25:33.184 { 
00:25:33.184 "name": "BaseBdev4", 00:25:33.184 "uuid": "7c499f33-3c92-5d01-a4d9-1e13bcf8da2c", 00:25:33.184 "is_configured": true, 00:25:33.184 "data_offset": 0, 00:25:33.184 "data_size": 65536 00:25:33.184 } 00:25:33.184 ] 00:25:33.184 }' 00:25:33.184 13:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:33.184 13:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:33.184 13:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:33.441 13:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:33.441 13:25:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:33.441 [2024-07-25 13:25:43.723456] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:33.441 [2024-07-25 13:25:43.723613] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:33.441 [2024-07-25 13:25:43.899414] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:33.699 [2024-07-25 13:25:43.983671] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:33.699 [2024-07-25 13:25:44.001622] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.699 [2024-07-25 13:25:44.001648] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:33.699 [2024-07-25 13:25:44.001657] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:33.699 [2024-07-25 13:25:44.030455] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xf06410 
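The verify_raid_bdev_state calls in this log fetch the raid bdev description over RPC and filter it with jq before comparing fields. A rough stand-alone sketch of that filtering step follows; the here-doc JSON is a stand-in for the output of `rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all` (field names mirror the raid_bdev_info blobs above, values match the post-removal state with 3 of 4 base bdevs discovered).

```shell
# Sketch of the jq filtering performed by verify_raid_bdev_state. The
# here-doc below stands in for the RPC output of:
#   rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
raid_bdev_info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<'EOF'
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3
  }
]
EOF
)

# Extract the individual fields the test asserts on.
state=$(printf '%s\n' "$raid_bdev_info" | jq -r '.state')
discovered=$(printf '%s\n' "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')
operational=$(printf '%s\n' "$raid_bdev_info" | jq -r '.num_base_bdevs_operational')
echo "state=$state discovered=$discovered operational=$operational"
```

After removing one base bdev of a 4-way raid1 set, the array stays online with `num_base_bdevs_discovered` and `num_base_bdevs_operational` both dropping to 3, which is exactly what the raid_bdev_info dumps in this log show.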
00:25:33.699 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:33.699 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:33.699 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:33.699 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:33.699 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:33.699 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:33.699 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:33.699 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:33.699 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:33.699 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:33.699 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.699 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.957 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:33.957 "name": "raid_bdev1", 00:25:33.957 "uuid": "4ff59462-4bce-4d02-8b7b-fbc911fbb4e3", 00:25:33.957 "strip_size_kb": 0, 00:25:33.957 "state": "online", 00:25:33.957 "raid_level": "raid1", 00:25:33.957 "superblock": false, 00:25:33.957 "num_base_bdevs": 4, 00:25:33.957 "num_base_bdevs_discovered": 3, 00:25:33.957 "num_base_bdevs_operational": 3, 00:25:33.957 "base_bdevs_list": [ 00:25:33.957 { 00:25:33.957 "name": null, 00:25:33.957 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:33.957 "is_configured": false, 00:25:33.957 "data_offset": 0, 00:25:33.957 "data_size": 65536 00:25:33.957 }, 00:25:33.957 { 00:25:33.957 "name": "BaseBdev2", 00:25:33.957 "uuid": "d8587399-1353-5518-89be-3dab0b7d1b33", 00:25:33.957 "is_configured": true, 00:25:33.957 "data_offset": 0, 00:25:33.957 "data_size": 65536 00:25:33.957 }, 00:25:33.957 { 00:25:33.957 "name": "BaseBdev3", 00:25:33.957 "uuid": "b8099b36-19ff-5fe0-91ba-be1eb88ed338", 00:25:33.957 "is_configured": true, 00:25:33.957 "data_offset": 0, 00:25:33.957 "data_size": 65536 00:25:33.957 }, 00:25:33.957 { 00:25:33.957 "name": "BaseBdev4", 00:25:33.957 "uuid": "7c499f33-3c92-5d01-a4d9-1e13bcf8da2c", 00:25:33.957 "is_configured": true, 00:25:33.957 "data_offset": 0, 00:25:33.957 "data_size": 65536 00:25:33.957 } 00:25:33.957 ] 00:25:33.957 }' 00:25:33.957 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:33.957 13:25:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:34.521 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:34.521 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:34.521 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:34.521 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:34.521 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:34.521 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.521 13:25:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.778 13:25:45 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:34.778 "name": "raid_bdev1", 00:25:34.778 "uuid": "4ff59462-4bce-4d02-8b7b-fbc911fbb4e3", 00:25:34.778 "strip_size_kb": 0, 00:25:34.778 "state": "online", 00:25:34.778 "raid_level": "raid1", 00:25:34.778 "superblock": false, 00:25:34.778 "num_base_bdevs": 4, 00:25:34.778 "num_base_bdevs_discovered": 3, 00:25:34.778 "num_base_bdevs_operational": 3, 00:25:34.778 "base_bdevs_list": [ 00:25:34.778 { 00:25:34.778 "name": null, 00:25:34.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:34.778 "is_configured": false, 00:25:34.778 "data_offset": 0, 00:25:34.778 "data_size": 65536 00:25:34.778 }, 00:25:34.778 { 00:25:34.778 "name": "BaseBdev2", 00:25:34.778 "uuid": "d8587399-1353-5518-89be-3dab0b7d1b33", 00:25:34.778 "is_configured": true, 00:25:34.778 "data_offset": 0, 00:25:34.778 "data_size": 65536 00:25:34.778 }, 00:25:34.778 { 00:25:34.778 "name": "BaseBdev3", 00:25:34.778 "uuid": "b8099b36-19ff-5fe0-91ba-be1eb88ed338", 00:25:34.778 "is_configured": true, 00:25:34.778 "data_offset": 0, 00:25:34.778 "data_size": 65536 00:25:34.778 }, 00:25:34.778 { 00:25:34.778 "name": "BaseBdev4", 00:25:34.778 "uuid": "7c499f33-3c92-5d01-a4d9-1e13bcf8da2c", 00:25:34.778 "is_configured": true, 00:25:34.778 "data_offset": 0, 00:25:34.778 "data_size": 65536 00:25:34.778 } 00:25:34.778 ] 00:25:34.778 }' 00:25:34.778 13:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:34.778 13:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:34.778 13:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:34.778 13:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:34.778 13:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev 
raid_bdev1 spare 00:25:35.038 [2024-07-25 13:25:45.463563] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:35.038 13:25:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:25:35.323 [2024-07-25 13:25:45.532320] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x109e180 00:25:35.323 [2024-07-25 13:25:45.533728] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:35.323 [2024-07-25 13:25:45.652633] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:35.323 [2024-07-25 13:25:45.653029] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:35.323 [2024-07-25 13:25:45.777423] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:35.323 [2024-07-25 13:25:45.777972] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:35.905 [2024-07-25 13:25:46.112326] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:36.163 13:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:36.163 13:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:36.163 13:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:36.163 13:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:36.163 13:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:36.163 13:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.163 13:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.163 [2024-07-25 13:25:46.608660] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:36.421 13:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:36.421 "name": "raid_bdev1", 00:25:36.421 "uuid": "4ff59462-4bce-4d02-8b7b-fbc911fbb4e3", 00:25:36.421 "strip_size_kb": 0, 00:25:36.421 "state": "online", 00:25:36.421 "raid_level": "raid1", 00:25:36.421 "superblock": false, 00:25:36.421 "num_base_bdevs": 4, 00:25:36.421 "num_base_bdevs_discovered": 4, 00:25:36.421 "num_base_bdevs_operational": 4, 00:25:36.421 "process": { 00:25:36.421 "type": "rebuild", 00:25:36.421 "target": "spare", 00:25:36.421 "progress": { 00:25:36.421 "blocks": 18432, 00:25:36.421 "percent": 28 00:25:36.421 } 00:25:36.421 }, 00:25:36.421 "base_bdevs_list": [ 00:25:36.421 { 00:25:36.421 "name": "spare", 00:25:36.421 "uuid": "c4fa4bc9-a738-5036-9e4d-85e7b7c8fc8d", 00:25:36.421 "is_configured": true, 00:25:36.421 "data_offset": 0, 00:25:36.421 "data_size": 65536 00:25:36.421 }, 00:25:36.421 { 00:25:36.421 "name": "BaseBdev2", 00:25:36.421 "uuid": "d8587399-1353-5518-89be-3dab0b7d1b33", 00:25:36.421 "is_configured": true, 00:25:36.421 "data_offset": 0, 00:25:36.421 "data_size": 65536 00:25:36.421 }, 00:25:36.421 { 00:25:36.421 "name": "BaseBdev3", 00:25:36.421 "uuid": "b8099b36-19ff-5fe0-91ba-be1eb88ed338", 00:25:36.421 "is_configured": true, 00:25:36.421 "data_offset": 0, 00:25:36.421 "data_size": 65536 00:25:36.421 }, 00:25:36.421 { 00:25:36.421 "name": "BaseBdev4", 00:25:36.421 "uuid": "7c499f33-3c92-5d01-a4d9-1e13bcf8da2c", 00:25:36.421 "is_configured": true, 00:25:36.421 "data_offset": 0, 00:25:36.421 "data_size": 65536 00:25:36.421 } 00:25:36.421 ] 00:25:36.421 }' 00:25:36.421 13:25:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:36.421 13:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:36.421 13:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:36.421 13:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:36.421 13:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:25:36.421 13:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:25:36.421 13:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:25:36.421 13:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:25:36.421 13:25:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:36.679 [2024-07-25 13:25:46.976260] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:36.679 [2024-07-25 13:25:47.050296] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:36.679 [2024-07-25 13:25:47.122487] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0xf06410 00:25:36.679 [2024-07-25 13:25:47.122510] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x109e180 00:25:36.679 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:25:36.679 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:25:36.679 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:36.679 13:25:47 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:36.679 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:36.679 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:36.679 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:36.679 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.679 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.936 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:36.936 "name": "raid_bdev1", 00:25:36.936 "uuid": "4ff59462-4bce-4d02-8b7b-fbc911fbb4e3", 00:25:36.936 "strip_size_kb": 0, 00:25:36.936 "state": "online", 00:25:36.936 "raid_level": "raid1", 00:25:36.936 "superblock": false, 00:25:36.936 "num_base_bdevs": 4, 00:25:36.936 "num_base_bdevs_discovered": 3, 00:25:36.936 "num_base_bdevs_operational": 3, 00:25:36.936 "process": { 00:25:36.936 "type": "rebuild", 00:25:36.936 "target": "spare", 00:25:36.936 "progress": { 00:25:36.936 "blocks": 28672, 00:25:36.936 "percent": 43 00:25:36.936 } 00:25:36.936 }, 00:25:36.936 "base_bdevs_list": [ 00:25:36.936 { 00:25:36.936 "name": "spare", 00:25:36.936 "uuid": "c4fa4bc9-a738-5036-9e4d-85e7b7c8fc8d", 00:25:36.936 "is_configured": true, 00:25:36.936 "data_offset": 0, 00:25:36.936 "data_size": 65536 00:25:36.936 }, 00:25:36.936 { 00:25:36.936 "name": null, 00:25:36.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.936 "is_configured": false, 00:25:36.936 "data_offset": 0, 00:25:36.936 "data_size": 65536 00:25:36.936 }, 00:25:36.936 { 00:25:36.936 "name": "BaseBdev3", 00:25:36.936 "uuid": "b8099b36-19ff-5fe0-91ba-be1eb88ed338", 00:25:36.936 
"is_configured": true, 00:25:36.936 "data_offset": 0, 00:25:36.936 "data_size": 65536 00:25:36.936 }, 00:25:36.936 { 00:25:36.936 "name": "BaseBdev4", 00:25:36.936 "uuid": "7c499f33-3c92-5d01-a4d9-1e13bcf8da2c", 00:25:36.936 "is_configured": true, 00:25:36.936 "data_offset": 0, 00:25:36.936 "data_size": 65536 00:25:36.936 } 00:25:36.936 ] 00:25:36.936 }' 00:25:36.936 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:37.193 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:37.193 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:37.193 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:37.193 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=902 00:25:37.193 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:37.193 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:37.193 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:37.193 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:37.193 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:37.193 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:37.193 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.193 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.193 [2024-07-25 13:25:47.589582] bdev_raid.c: 852:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:25:37.451 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:37.451 "name": "raid_bdev1", 00:25:37.451 "uuid": "4ff59462-4bce-4d02-8b7b-fbc911fbb4e3", 00:25:37.451 "strip_size_kb": 0, 00:25:37.451 "state": "online", 00:25:37.451 "raid_level": "raid1", 00:25:37.451 "superblock": false, 00:25:37.451 "num_base_bdevs": 4, 00:25:37.451 "num_base_bdevs_discovered": 3, 00:25:37.451 "num_base_bdevs_operational": 3, 00:25:37.451 "process": { 00:25:37.451 "type": "rebuild", 00:25:37.451 "target": "spare", 00:25:37.451 "progress": { 00:25:37.451 "blocks": 32768, 00:25:37.451 "percent": 50 00:25:37.451 } 00:25:37.451 }, 00:25:37.451 "base_bdevs_list": [ 00:25:37.451 { 00:25:37.451 "name": "spare", 00:25:37.451 "uuid": "c4fa4bc9-a738-5036-9e4d-85e7b7c8fc8d", 00:25:37.451 "is_configured": true, 00:25:37.451 "data_offset": 0, 00:25:37.451 "data_size": 65536 00:25:37.451 }, 00:25:37.451 { 00:25:37.451 "name": null, 00:25:37.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.451 "is_configured": false, 00:25:37.451 "data_offset": 0, 00:25:37.451 "data_size": 65536 00:25:37.451 }, 00:25:37.451 { 00:25:37.451 "name": "BaseBdev3", 00:25:37.451 "uuid": "b8099b36-19ff-5fe0-91ba-be1eb88ed338", 00:25:37.451 "is_configured": true, 00:25:37.451 "data_offset": 0, 00:25:37.451 "data_size": 65536 00:25:37.451 }, 00:25:37.451 { 00:25:37.451 "name": "BaseBdev4", 00:25:37.451 "uuid": "7c499f33-3c92-5d01-a4d9-1e13bcf8da2c", 00:25:37.451 "is_configured": true, 00:25:37.451 "data_offset": 0, 00:25:37.451 "data_size": 65536 00:25:37.451 } 00:25:37.451 ] 00:25:37.451 }' 00:25:37.451 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:37.451 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:37.451 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.target // "none"' 00:25:37.451 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:37.451 13:25:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:37.451 [2024-07-25 13:25:47.937127] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:25:38.014 [2024-07-25 13:25:48.384963] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:25:38.579 13:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:38.579 13:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:38.579 13:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:38.579 13:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:38.579 13:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:38.579 13:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:38.579 13:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.579 13:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.579 [2024-07-25 13:25:48.930253] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:25:38.579 13:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:38.579 "name": "raid_bdev1", 00:25:38.579 "uuid": "4ff59462-4bce-4d02-8b7b-fbc911fbb4e3", 00:25:38.579 "strip_size_kb": 0, 00:25:38.579 "state": "online", 
00:25:38.579 "raid_level": "raid1", 00:25:38.579 "superblock": false, 00:25:38.579 "num_base_bdevs": 4, 00:25:38.579 "num_base_bdevs_discovered": 3, 00:25:38.579 "num_base_bdevs_operational": 3, 00:25:38.579 "process": { 00:25:38.579 "type": "rebuild", 00:25:38.579 "target": "spare", 00:25:38.579 "progress": { 00:25:38.579 "blocks": 57344, 00:25:38.579 "percent": 87 00:25:38.579 } 00:25:38.579 }, 00:25:38.579 "base_bdevs_list": [ 00:25:38.579 { 00:25:38.579 "name": "spare", 00:25:38.579 "uuid": "c4fa4bc9-a738-5036-9e4d-85e7b7c8fc8d", 00:25:38.579 "is_configured": true, 00:25:38.579 "data_offset": 0, 00:25:38.579 "data_size": 65536 00:25:38.579 }, 00:25:38.579 { 00:25:38.579 "name": null, 00:25:38.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.579 "is_configured": false, 00:25:38.579 "data_offset": 0, 00:25:38.579 "data_size": 65536 00:25:38.579 }, 00:25:38.579 { 00:25:38.579 "name": "BaseBdev3", 00:25:38.579 "uuid": "b8099b36-19ff-5fe0-91ba-be1eb88ed338", 00:25:38.579 "is_configured": true, 00:25:38.579 "data_offset": 0, 00:25:38.579 "data_size": 65536 00:25:38.579 }, 00:25:38.579 { 00:25:38.579 "name": "BaseBdev4", 00:25:38.579 "uuid": "7c499f33-3c92-5d01-a4d9-1e13bcf8da2c", 00:25:38.579 "is_configured": true, 00:25:38.579 "data_offset": 0, 00:25:38.579 "data_size": 65536 00:25:38.579 } 00:25:38.579 ] 00:25:38.579 }' 00:25:38.579 13:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:38.579 [2024-07-25 13:25:49.032319] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:25:38.579 [2024-07-25 13:25:49.032496] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:25:38.837 13:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:38.837 13:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq 
-r '.process.target // "none"' 00:25:38.837 13:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:38.837 13:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:39.095 [2024-07-25 13:25:49.396055] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:39.095 [2024-07-25 13:25:49.496319] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:39.095 [2024-07-25 13:25:49.506052] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:39.662 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:39.662 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:39.662 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:39.662 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:39.662 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:39.662 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:39.662 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.662 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.919 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:39.920 "name": "raid_bdev1", 00:25:39.920 "uuid": "4ff59462-4bce-4d02-8b7b-fbc911fbb4e3", 00:25:39.920 "strip_size_kb": 0, 00:25:39.920 "state": "online", 00:25:39.920 "raid_level": "raid1", 00:25:39.920 "superblock": false, 00:25:39.920 "num_base_bdevs": 4, 00:25:39.920 
"num_base_bdevs_discovered": 3, 00:25:39.920 "num_base_bdevs_operational": 3, 00:25:39.920 "base_bdevs_list": [ 00:25:39.920 { 00:25:39.920 "name": "spare", 00:25:39.920 "uuid": "c4fa4bc9-a738-5036-9e4d-85e7b7c8fc8d", 00:25:39.920 "is_configured": true, 00:25:39.920 "data_offset": 0, 00:25:39.920 "data_size": 65536 00:25:39.920 }, 00:25:39.920 { 00:25:39.920 "name": null, 00:25:39.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.920 "is_configured": false, 00:25:39.920 "data_offset": 0, 00:25:39.920 "data_size": 65536 00:25:39.920 }, 00:25:39.920 { 00:25:39.920 "name": "BaseBdev3", 00:25:39.920 "uuid": "b8099b36-19ff-5fe0-91ba-be1eb88ed338", 00:25:39.920 "is_configured": true, 00:25:39.920 "data_offset": 0, 00:25:39.920 "data_size": 65536 00:25:39.920 }, 00:25:39.920 { 00:25:39.920 "name": "BaseBdev4", 00:25:39.920 "uuid": "7c499f33-3c92-5d01-a4d9-1e13bcf8da2c", 00:25:39.920 "is_configured": true, 00:25:39.920 "data_offset": 0, 00:25:39.920 "data_size": 65536 00:25:39.920 } 00:25:39.920 ] 00:25:39.920 }' 00:25:39.920 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:39.920 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:39.920 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:40.178 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:40.178 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:25:40.178 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:40.178 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:40.178 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:40.178 13:25:50 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@184 -- # local target=none 00:25:40.178 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:40.178 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.178 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.436 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:40.436 "name": "raid_bdev1", 00:25:40.436 "uuid": "4ff59462-4bce-4d02-8b7b-fbc911fbb4e3", 00:25:40.436 "strip_size_kb": 0, 00:25:40.436 "state": "online", 00:25:40.436 "raid_level": "raid1", 00:25:40.436 "superblock": false, 00:25:40.436 "num_base_bdevs": 4, 00:25:40.436 "num_base_bdevs_discovered": 3, 00:25:40.436 "num_base_bdevs_operational": 3, 00:25:40.436 "base_bdevs_list": [ 00:25:40.436 { 00:25:40.436 "name": "spare", 00:25:40.436 "uuid": "c4fa4bc9-a738-5036-9e4d-85e7b7c8fc8d", 00:25:40.436 "is_configured": true, 00:25:40.436 "data_offset": 0, 00:25:40.436 "data_size": 65536 00:25:40.436 }, 00:25:40.436 { 00:25:40.436 "name": null, 00:25:40.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.436 "is_configured": false, 00:25:40.436 "data_offset": 0, 00:25:40.436 "data_size": 65536 00:25:40.436 }, 00:25:40.436 { 00:25:40.436 "name": "BaseBdev3", 00:25:40.436 "uuid": "b8099b36-19ff-5fe0-91ba-be1eb88ed338", 00:25:40.436 "is_configured": true, 00:25:40.436 "data_offset": 0, 00:25:40.436 "data_size": 65536 00:25:40.436 }, 00:25:40.436 { 00:25:40.436 "name": "BaseBdev4", 00:25:40.436 "uuid": "7c499f33-3c92-5d01-a4d9-1e13bcf8da2c", 00:25:40.436 "is_configured": true, 00:25:40.436 "data_offset": 0, 00:25:40.436 "data_size": 65536 00:25:40.436 } 00:25:40.436 ] 00:25:40.436 }' 00:25:40.436 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:25:40.436 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:40.436 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:40.436 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:40.436 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:40.436 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:40.436 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:40.436 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:40.436 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:40.437 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:40.437 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:40.437 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:40.437 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:40.437 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:40.437 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.437 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.694 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:40.694 "name": "raid_bdev1", 00:25:40.694 "uuid": "4ff59462-4bce-4d02-8b7b-fbc911fbb4e3", 00:25:40.694 "strip_size_kb": 0, 00:25:40.694 
"state": "online", 00:25:40.694 "raid_level": "raid1", 00:25:40.694 "superblock": false, 00:25:40.694 "num_base_bdevs": 4, 00:25:40.694 "num_base_bdevs_discovered": 3, 00:25:40.694 "num_base_bdevs_operational": 3, 00:25:40.694 "base_bdevs_list": [ 00:25:40.694 { 00:25:40.694 "name": "spare", 00:25:40.694 "uuid": "c4fa4bc9-a738-5036-9e4d-85e7b7c8fc8d", 00:25:40.694 "is_configured": true, 00:25:40.694 "data_offset": 0, 00:25:40.694 "data_size": 65536 00:25:40.694 }, 00:25:40.694 { 00:25:40.694 "name": null, 00:25:40.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.694 "is_configured": false, 00:25:40.694 "data_offset": 0, 00:25:40.694 "data_size": 65536 00:25:40.694 }, 00:25:40.694 { 00:25:40.694 "name": "BaseBdev3", 00:25:40.695 "uuid": "b8099b36-19ff-5fe0-91ba-be1eb88ed338", 00:25:40.695 "is_configured": true, 00:25:40.695 "data_offset": 0, 00:25:40.695 "data_size": 65536 00:25:40.695 }, 00:25:40.695 { 00:25:40.695 "name": "BaseBdev4", 00:25:40.695 "uuid": "7c499f33-3c92-5d01-a4d9-1e13bcf8da2c", 00:25:40.695 "is_configured": true, 00:25:40.695 "data_offset": 0, 00:25:40.695 "data_size": 65536 00:25:40.695 } 00:25:40.695 ] 00:25:40.695 }' 00:25:40.695 13:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:40.695 13:25:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:41.260 13:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:41.518 [2024-07-25 13:25:51.756872] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:41.518 [2024-07-25 13:25:51.756900] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:41.518 00:25:41.518 Latency(us) 00:25:41.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.518 Job: raid_bdev1 (Core Mask 0x1, workload: 
randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:41.518 raid_bdev1 : 10.68 101.57 304.72 0.00 0.00 13852.19 271.97 120795.96 00:25:41.518 =================================================================================================================== 00:25:41.518 Total : 101.57 304.72 0.00 0.00 13852.19 271.97 120795.96 00:25:41.518 [2024-07-25 13:25:51.820671] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:41.518 [2024-07-25 13:25:51.820698] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:41.518 [2024-07-25 13:25:51.820783] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:41.518 [2024-07-25 13:25:51.820794] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf00e80 name raid_bdev1, state offline 00:25:41.518 0 00:25:41.518 13:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.518 13:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:25:41.776 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:25:41.776 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:25:41.776 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:25:41.776 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:25:41.776 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:41.776 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:25:41.776 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:41.776 13:25:52 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:41.776 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:41.776 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:25:41.776 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:41.776 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:41.776 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:25:42.035 /dev/nbd0 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:42.035 1+0 records in 00:25:42.035 1+0 records out 00:25:42.035 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000285782 s, 14.3 MB/s 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # continue 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:42.035 13:25:52 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:42.035 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:25:42.293 /dev/nbd1 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:42.293 1+0 records in 00:25:42.293 1+0 records out 00:25:42.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285397 s, 14.4 MB/s 00:25:42.293 13:25:52 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:42.293 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:42.551 
13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:42.551 13:25:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:25:42.809 /dev/nbd1 00:25:42.809 
13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:42.809 1+0 records in 00:25:42.809 1+0 records out 00:25:42.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028523 s, 14.4 MB/s 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 
-- # return 0 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:42.809 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:43.066 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:43.066 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:43.066 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:43.067 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:43.067 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:43.067 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:43.067 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:25:43.067 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:43.067 13:25:53 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:43.067 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:43.067 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:43.067 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:43.067 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:25:43.067 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:43.067 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 985130 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 985130 ']' 00:25:43.324 13:25:53 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 985130 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 985130 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 985130' 00:25:43.324 killing process with pid 985130 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 985130 00:25:43.324 Received shutdown signal, test time was about 12.647030 seconds 00:25:43.324 00:25:43.324 Latency(us) 00:25:43.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.324 =================================================================================================================== 00:25:43.324 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:43.324 [2024-07-25 13:25:53.786767] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:43.324 13:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 985130 00:25:43.582 [2024-07-25 13:25:53.821130] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:43.582 13:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:25:43.582 00:25:43.582 real 0m17.965s 00:25:43.582 user 0m27.795s 00:25:43.582 sys 0m3.212s 00:25:43.582 13:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:43.582 13:25:54 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:25:43.582 ************************************ 00:25:43.582 END TEST raid_rebuild_test_io 00:25:43.582 ************************************ 00:25:43.582 13:25:54 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:25:43.582 13:25:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:25:43.582 13:25:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:43.582 13:25:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:43.840 ************************************ 00:25:43.840 START TEST raid_rebuild_test_sb_io 00:25:43.840 ************************************ 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # 
echo BaseBdev2 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:25:43.840 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:25:43.841 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:25:43.841 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:25:43.841 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:25:43.841 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:25:43.841 13:25:54 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:25:43.841 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=988292 00:25:43.841 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 988292 /var/tmp/spdk-raid.sock 00:25:43.841 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:43.841 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 988292 ']' 00:25:43.841 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:43.841 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:43.841 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:43.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:43.841 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:43.841 13:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:43.841 [2024-07-25 13:25:54.159006] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:25:43.841 [2024-07-25 13:25:54.159061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid988292 ] 00:25:43.841 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:43.841 Zero copy mechanism will not be used. 
00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:43.841 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:43.841 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:43.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.841 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:43.841 [2024-07-25 13:25:54.291106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.099 [2024-07-25 13:25:54.377680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.099 [2024-07-25 13:25:54.441743] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:44.099 [2024-07-25 13:25:54.441772] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:44.665 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:44.665 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:25:44.665 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:44.665 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:44.923 BaseBdev1_malloc 00:25:44.923 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:45.180 [2024-07-25 13:25:55.506001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:45.180 [2024-07-25 13:25:55.506044] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.180 [2024-07-25 13:25:55.506064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa085f0 00:25:45.180 [2024-07-25 13:25:55.506076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.180 [2024-07-25 13:25:55.507604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.180 [2024-07-25 13:25:55.507631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:45.180 BaseBdev1 00:25:45.180 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:45.180 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:45.438 BaseBdev2_malloc 00:25:45.438 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:45.704 [2024-07-25 13:25:55.951697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:45.704 [2024-07-25 13:25:55.951737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.704 [2024-07-25 13:25:55.951755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbabfd0 00:25:45.704 [2024-07-25 13:25:55.951766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.704 [2024-07-25 13:25:55.953184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.704 [2024-07-25 13:25:55.953210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:45.704 BaseBdev2 00:25:45.704 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:45.704 13:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:45.704 BaseBdev3_malloc 00:25:45.962 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:45.962 [2024-07-25 13:25:56.409117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:45.962 [2024-07-25 13:25:56.409163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.962 [2024-07-25 13:25:56.409181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xba1da0 00:25:45.962 [2024-07-25 13:25:56.409192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.962 [2024-07-25 13:25:56.410544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.962 [2024-07-25 13:25:56.410570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:45.962 BaseBdev3 00:25:45.962 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:45.962 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:46.219 BaseBdev4_malloc 00:25:46.219 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:46.477 [2024-07-25 13:25:56.866622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on BaseBdev4_malloc 00:25:46.477 [2024-07-25 13:25:56.866663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.477 [2024-07-25 13:25:56.866680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa00290 00:25:46.477 [2024-07-25 13:25:56.866691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.477 [2024-07-25 13:25:56.868035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.477 [2024-07-25 13:25:56.868059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:46.477 BaseBdev4 00:25:46.477 13:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:46.735 spare_malloc 00:25:46.735 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:46.992 spare_delay 00:25:46.992 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:47.250 [2024-07-25 13:25:57.536569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:47.250 [2024-07-25 13:25:57.536611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.250 [2024-07-25 13:25:57.536630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa02eb0 00:25:47.250 [2024-07-25 13:25:57.536642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.250 [2024-07-25 13:25:57.538028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:25:47.250 [2024-07-25 13:25:57.538055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:47.250 spare 00:25:47.250 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:47.509 [2024-07-25 13:25:57.761196] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:47.509 [2024-07-25 13:25:57.762368] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:47.509 [2024-07-25 13:25:57.762418] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:47.509 [2024-07-25 13:25:57.762460] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:47.509 [2024-07-25 13:25:57.762626] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x9ffe80 00:25:47.509 [2024-07-25 13:25:57.762636] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:47.509 [2024-07-25 13:25:57.762821] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9ffd90 00:25:47.509 [2024-07-25 13:25:57.762955] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9ffe80 00:25:47.509 [2024-07-25 13:25:57.762965] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9ffe80 00:25:47.509 [2024-07-25 13:25:57.763063] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:47.509 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:47.509 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:47.509 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- 
# local expected_state=online 00:25:47.509 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:47.509 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:47.509 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:47.509 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:47.509 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:47.509 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:47.509 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:47.509 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.509 13:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.769 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:47.769 "name": "raid_bdev1", 00:25:47.769 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:25:47.769 "strip_size_kb": 0, 00:25:47.769 "state": "online", 00:25:47.769 "raid_level": "raid1", 00:25:47.769 "superblock": true, 00:25:47.769 "num_base_bdevs": 4, 00:25:47.769 "num_base_bdevs_discovered": 4, 00:25:47.769 "num_base_bdevs_operational": 4, 00:25:47.769 "base_bdevs_list": [ 00:25:47.769 { 00:25:47.769 "name": "BaseBdev1", 00:25:47.769 "uuid": "e44b1394-dd7d-504f-a485-385ed2bed7bb", 00:25:47.769 "is_configured": true, 00:25:47.769 "data_offset": 2048, 00:25:47.769 "data_size": 63488 00:25:47.769 }, 00:25:47.769 { 00:25:47.769 "name": "BaseBdev2", 00:25:47.769 "uuid": "9fe53585-62f8-5b1d-a652-8d2cc22fcb59", 00:25:47.769 "is_configured": true, 
00:25:47.769 "data_offset": 2048, 00:25:47.769 "data_size": 63488 00:25:47.769 }, 00:25:47.769 { 00:25:47.769 "name": "BaseBdev3", 00:25:47.769 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:25:47.769 "is_configured": true, 00:25:47.769 "data_offset": 2048, 00:25:47.769 "data_size": 63488 00:25:47.769 }, 00:25:47.769 { 00:25:47.769 "name": "BaseBdev4", 00:25:47.769 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:25:47.769 "is_configured": true, 00:25:47.769 "data_offset": 2048, 00:25:47.769 "data_size": 63488 00:25:47.769 } 00:25:47.769 ] 00:25:47.769 }' 00:25:47.769 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:47.769 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:48.338 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:48.338 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:25:48.338 [2024-07-25 13:25:58.800490] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:48.338 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:25:48.338 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.597 13:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:48.597 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:25:48.597 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:25:48.597 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:48.597 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:48.857 [2024-07-25 13:25:59.151018] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa05410 00:25:48.857 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:48.857 Zero copy mechanism will not be used. 00:25:48.857 Running I/O for 60 seconds... 00:25:48.857 [2024-07-25 13:25:59.258028] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:48.857 [2024-07-25 13:25:59.265587] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xa05410 00:25:48.857 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:48.857 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:48.857 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:48.857 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:48.857 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:48.857 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:48.857 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:48.857 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:48.857 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:48.857 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:48.857 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.857 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.150 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:49.150 "name": "raid_bdev1", 00:25:49.150 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:25:49.150 "strip_size_kb": 0, 00:25:49.150 "state": "online", 00:25:49.150 "raid_level": "raid1", 00:25:49.150 "superblock": true, 00:25:49.150 "num_base_bdevs": 4, 00:25:49.150 "num_base_bdevs_discovered": 3, 00:25:49.150 "num_base_bdevs_operational": 3, 00:25:49.150 "base_bdevs_list": [ 00:25:49.150 { 00:25:49.150 "name": null, 00:25:49.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.150 "is_configured": false, 00:25:49.150 "data_offset": 2048, 00:25:49.150 "data_size": 63488 00:25:49.150 }, 00:25:49.150 { 00:25:49.150 "name": "BaseBdev2", 00:25:49.150 "uuid": "9fe53585-62f8-5b1d-a652-8d2cc22fcb59", 00:25:49.150 "is_configured": true, 00:25:49.150 "data_offset": 2048, 00:25:49.150 "data_size": 63488 00:25:49.150 }, 00:25:49.150 { 00:25:49.150 "name": "BaseBdev3", 00:25:49.150 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:25:49.150 "is_configured": true, 00:25:49.150 "data_offset": 2048, 00:25:49.150 "data_size": 63488 00:25:49.150 }, 00:25:49.150 { 00:25:49.150 "name": "BaseBdev4", 00:25:49.150 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:25:49.150 "is_configured": true, 00:25:49.150 "data_offset": 2048, 00:25:49.150 "data_size": 63488 00:25:49.150 } 00:25:49.150 ] 00:25:49.150 }' 00:25:49.150 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:49.150 13:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:49.719 13:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:49.979 [2024-07-25 13:26:00.335911] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:49.979 13:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:49.979 [2024-07-25 13:26:00.395385] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbaf070 00:25:49.979 [2024-07-25 13:26:00.397581] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:50.238 [2024-07-25 13:26:00.533982] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:50.238 [2024-07-25 13:26:00.534280] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:50.238 [2024-07-25 13:26:00.651616] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:50.238 [2024-07-25 13:26:00.652181] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:51.178 13:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:51.178 13:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:51.178 13:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:51.178 13:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:51.178 13:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:51.178 13:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:25:51.178 13:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.178 [2024-07-25 13:26:01.400176] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:51.178 [2024-07-25 13:26:01.401274] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:51.178 13:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:51.178 "name": "raid_bdev1", 00:25:51.178 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:25:51.178 "strip_size_kb": 0, 00:25:51.178 "state": "online", 00:25:51.178 "raid_level": "raid1", 00:25:51.178 "superblock": true, 00:25:51.178 "num_base_bdevs": 4, 00:25:51.178 "num_base_bdevs_discovered": 4, 00:25:51.178 "num_base_bdevs_operational": 4, 00:25:51.178 "process": { 00:25:51.178 "type": "rebuild", 00:25:51.178 "target": "spare", 00:25:51.178 "progress": { 00:25:51.178 "blocks": 14336, 00:25:51.178 "percent": 22 00:25:51.178 } 00:25:51.178 }, 00:25:51.178 "base_bdevs_list": [ 00:25:51.178 { 00:25:51.178 "name": "spare", 00:25:51.178 "uuid": "1326d8aa-a9d9-589f-b514-5b4058d9331e", 00:25:51.178 "is_configured": true, 00:25:51.178 "data_offset": 2048, 00:25:51.178 "data_size": 63488 00:25:51.178 }, 00:25:51.178 { 00:25:51.178 "name": "BaseBdev2", 00:25:51.178 "uuid": "9fe53585-62f8-5b1d-a652-8d2cc22fcb59", 00:25:51.178 "is_configured": true, 00:25:51.178 "data_offset": 2048, 00:25:51.178 "data_size": 63488 00:25:51.178 }, 00:25:51.178 { 00:25:51.178 "name": "BaseBdev3", 00:25:51.178 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:25:51.178 "is_configured": true, 00:25:51.178 "data_offset": 2048, 00:25:51.178 "data_size": 63488 00:25:51.178 }, 00:25:51.178 { 00:25:51.178 "name": "BaseBdev4", 00:25:51.178 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:25:51.178 "is_configured": 
true, 00:25:51.178 "data_offset": 2048, 00:25:51.178 "data_size": 63488 00:25:51.178 } 00:25:51.178 ] 00:25:51.178 }' 00:25:51.178 [2024-07-25 13:26:01.622275] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:51.178 13:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:51.437 13:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:51.437 13:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:51.437 13:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:51.437 13:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:51.437 [2024-07-25 13:26:01.880673] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:51.437 [2024-07-25 13:26:01.920808] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:51.697 [2024-07-25 13:26:02.031787] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:51.697 [2024-07-25 13:26:02.052500] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:51.697 [2024-07-25 13:26:02.052541] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:51.697 [2024-07-25 13:26:02.052551] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:51.697 [2024-07-25 13:26:02.065038] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xa05410 00:25:51.697 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:25:51.697 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:51.697 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:51.697 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:51.697 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:51.697 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:51.697 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:51.697 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:51.697 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:51.697 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:51.697 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.697 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.956 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:51.956 "name": "raid_bdev1", 00:25:51.956 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:25:51.956 "strip_size_kb": 0, 00:25:51.956 "state": "online", 00:25:51.956 "raid_level": "raid1", 00:25:51.956 "superblock": true, 00:25:51.956 "num_base_bdevs": 4, 00:25:51.956 "num_base_bdevs_discovered": 3, 00:25:51.956 "num_base_bdevs_operational": 3, 00:25:51.956 "base_bdevs_list": [ 00:25:51.956 { 00:25:51.956 "name": null, 00:25:51.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.956 "is_configured": false, 00:25:51.956 
"data_offset": 2048, 00:25:51.956 "data_size": 63488 00:25:51.956 }, 00:25:51.956 { 00:25:51.956 "name": "BaseBdev2", 00:25:51.956 "uuid": "9fe53585-62f8-5b1d-a652-8d2cc22fcb59", 00:25:51.956 "is_configured": true, 00:25:51.956 "data_offset": 2048, 00:25:51.956 "data_size": 63488 00:25:51.956 }, 00:25:51.956 { 00:25:51.956 "name": "BaseBdev3", 00:25:51.956 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:25:51.956 "is_configured": true, 00:25:51.956 "data_offset": 2048, 00:25:51.956 "data_size": 63488 00:25:51.956 }, 00:25:51.956 { 00:25:51.956 "name": "BaseBdev4", 00:25:51.956 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:25:51.956 "is_configured": true, 00:25:51.957 "data_offset": 2048, 00:25:51.957 "data_size": 63488 00:25:51.957 } 00:25:51.957 ] 00:25:51.957 }' 00:25:51.957 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:51.957 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:52.526 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:52.526 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:52.526 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:52.526 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:52.526 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:52.526 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.526 13:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.786 13:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:25:52.786 "name": "raid_bdev1", 00:25:52.786 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:25:52.786 "strip_size_kb": 0, 00:25:52.786 "state": "online", 00:25:52.786 "raid_level": "raid1", 00:25:52.786 "superblock": true, 00:25:52.786 "num_base_bdevs": 4, 00:25:52.786 "num_base_bdevs_discovered": 3, 00:25:52.786 "num_base_bdevs_operational": 3, 00:25:52.786 "base_bdevs_list": [ 00:25:52.786 { 00:25:52.786 "name": null, 00:25:52.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.786 "is_configured": false, 00:25:52.786 "data_offset": 2048, 00:25:52.786 "data_size": 63488 00:25:52.786 }, 00:25:52.786 { 00:25:52.786 "name": "BaseBdev2", 00:25:52.786 "uuid": "9fe53585-62f8-5b1d-a652-8d2cc22fcb59", 00:25:52.786 "is_configured": true, 00:25:52.786 "data_offset": 2048, 00:25:52.786 "data_size": 63488 00:25:52.786 }, 00:25:52.786 { 00:25:52.786 "name": "BaseBdev3", 00:25:52.786 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:25:52.786 "is_configured": true, 00:25:52.786 "data_offset": 2048, 00:25:52.786 "data_size": 63488 00:25:52.786 }, 00:25:52.786 { 00:25:52.786 "name": "BaseBdev4", 00:25:52.786 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:25:52.786 "is_configured": true, 00:25:52.786 "data_offset": 2048, 00:25:52.786 "data_size": 63488 00:25:52.786 } 00:25:52.786 ] 00:25:52.786 }' 00:25:52.786 13:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:52.786 13:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:52.786 13:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:52.786 13:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:52.786 13:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 
00:25:53.045 [2024-07-25 13:26:03.440196] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:53.045 13:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:25:53.045 [2024-07-25 13:26:03.483729] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbad720 00:25:53.045 [2024-07-25 13:26:03.485137] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:53.305 [2024-07-25 13:26:03.613444] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:53.305 [2024-07-25 13:26:03.613719] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:53.305 [2024-07-25 13:26:03.725803] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:53.305 [2024-07-25 13:26:03.726379] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:53.564 [2024-07-25 13:26:04.049160] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:53.564 [2024-07-25 13:26:04.050268] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:53.822 [2024-07-25 13:26:04.261323] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:53.822 [2024-07-25 13:26:04.261476] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:54.080 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:54.080 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:25:54.080 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:54.080 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:54.080 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:54.080 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.080 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.339 [2024-07-25 13:26:04.636128] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:54.339 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:54.339 "name": "raid_bdev1", 00:25:54.339 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:25:54.339 "strip_size_kb": 0, 00:25:54.339 "state": "online", 00:25:54.339 "raid_level": "raid1", 00:25:54.339 "superblock": true, 00:25:54.339 "num_base_bdevs": 4, 00:25:54.339 "num_base_bdevs_discovered": 4, 00:25:54.339 "num_base_bdevs_operational": 4, 00:25:54.339 "process": { 00:25:54.339 "type": "rebuild", 00:25:54.339 "target": "spare", 00:25:54.339 "progress": { 00:25:54.339 "blocks": 16384, 00:25:54.339 "percent": 25 00:25:54.339 } 00:25:54.339 }, 00:25:54.339 "base_bdevs_list": [ 00:25:54.339 { 00:25:54.339 "name": "spare", 00:25:54.339 "uuid": "1326d8aa-a9d9-589f-b514-5b4058d9331e", 00:25:54.339 "is_configured": true, 00:25:54.339 "data_offset": 2048, 00:25:54.339 "data_size": 63488 00:25:54.339 }, 00:25:54.339 { 00:25:54.339 "name": "BaseBdev2", 00:25:54.339 "uuid": "9fe53585-62f8-5b1d-a652-8d2cc22fcb59", 00:25:54.339 "is_configured": true, 00:25:54.339 "data_offset": 2048, 00:25:54.339 "data_size": 63488 00:25:54.339 }, 
00:25:54.339 { 00:25:54.339 "name": "BaseBdev3", 00:25:54.339 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:25:54.339 "is_configured": true, 00:25:54.339 "data_offset": 2048, 00:25:54.339 "data_size": 63488 00:25:54.339 }, 00:25:54.339 { 00:25:54.339 "name": "BaseBdev4", 00:25:54.339 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:25:54.339 "is_configured": true, 00:25:54.339 "data_offset": 2048, 00:25:54.339 "data_size": 63488 00:25:54.339 } 00:25:54.339 ] 00:25:54.339 }' 00:25:54.339 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:54.339 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:54.339 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:54.339 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:54.339 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:25:54.340 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:25:54.340 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:25:54.340 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:25:54.340 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:25:54.340 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:25:54.340 13:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:54.599 [2024-07-25 13:26:04.866078] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 
00:25:54.599 [2024-07-25 13:26:04.995954] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:54.857 [2024-07-25 13:26:05.095901] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:54.857 [2024-07-25 13:26:05.096479] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:54.857 [2024-07-25 13:26:05.314257] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0xa05410 00:25:54.857 [2024-07-25 13:26:05.314280] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0xbad720 00:25:55.115 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:25:55.115 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:25:55.115 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:55.115 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:55.115 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:55.115 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:55.115 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:55.115 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.115 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.115 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:55.115 "name": "raid_bdev1", 00:25:55.115 "uuid": 
"a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:25:55.115 "strip_size_kb": 0, 00:25:55.115 "state": "online", 00:25:55.115 "raid_level": "raid1", 00:25:55.115 "superblock": true, 00:25:55.115 "num_base_bdevs": 4, 00:25:55.115 "num_base_bdevs_discovered": 3, 00:25:55.115 "num_base_bdevs_operational": 3, 00:25:55.115 "process": { 00:25:55.115 "type": "rebuild", 00:25:55.115 "target": "spare", 00:25:55.115 "progress": { 00:25:55.115 "blocks": 24576, 00:25:55.115 "percent": 38 00:25:55.115 } 00:25:55.115 }, 00:25:55.115 "base_bdevs_list": [ 00:25:55.115 { 00:25:55.115 "name": "spare", 00:25:55.115 "uuid": "1326d8aa-a9d9-589f-b514-5b4058d9331e", 00:25:55.115 "is_configured": true, 00:25:55.115 "data_offset": 2048, 00:25:55.115 "data_size": 63488 00:25:55.115 }, 00:25:55.115 { 00:25:55.115 "name": null, 00:25:55.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.115 "is_configured": false, 00:25:55.115 "data_offset": 2048, 00:25:55.115 "data_size": 63488 00:25:55.116 }, 00:25:55.116 { 00:25:55.116 "name": "BaseBdev3", 00:25:55.116 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:25:55.116 "is_configured": true, 00:25:55.116 "data_offset": 2048, 00:25:55.116 "data_size": 63488 00:25:55.116 }, 00:25:55.116 { 00:25:55.116 "name": "BaseBdev4", 00:25:55.116 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:25:55.116 "is_configured": true, 00:25:55.116 "data_offset": 2048, 00:25:55.116 "data_size": 63488 00:25:55.116 } 00:25:55.116 ] 00:25:55.116 }' 00:25:55.116 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:55.116 [2024-07-25 13:26:05.589115] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:25:55.374 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:55.374 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:25:55.374 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:55.374 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=920 00:25:55.374 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:55.374 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:55.374 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:55.374 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:55.374 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:55.374 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:55.374 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.374 13:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.633 [2024-07-25 13:26:06.013964] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:25:55.891 13:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:55.891 "name": "raid_bdev1", 00:25:55.891 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:25:55.891 "strip_size_kb": 0, 00:25:55.891 "state": "online", 00:25:55.891 "raid_level": "raid1", 00:25:55.891 "superblock": true, 00:25:55.891 "num_base_bdevs": 4, 00:25:55.891 "num_base_bdevs_discovered": 3, 00:25:55.891 "num_base_bdevs_operational": 3, 00:25:55.891 "process": { 00:25:55.891 "type": "rebuild", 00:25:55.891 "target": "spare", 00:25:55.891 "progress": { 00:25:55.891 "blocks": 
32768, 00:25:55.891 "percent": 51 00:25:55.891 } 00:25:55.891 }, 00:25:55.891 "base_bdevs_list": [ 00:25:55.891 { 00:25:55.891 "name": "spare", 00:25:55.891 "uuid": "1326d8aa-a9d9-589f-b514-5b4058d9331e", 00:25:55.891 "is_configured": true, 00:25:55.891 "data_offset": 2048, 00:25:55.891 "data_size": 63488 00:25:55.891 }, 00:25:55.891 { 00:25:55.891 "name": null, 00:25:55.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.891 "is_configured": false, 00:25:55.891 "data_offset": 2048, 00:25:55.891 "data_size": 63488 00:25:55.891 }, 00:25:55.891 { 00:25:55.891 "name": "BaseBdev3", 00:25:55.891 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:25:55.891 "is_configured": true, 00:25:55.891 "data_offset": 2048, 00:25:55.891 "data_size": 63488 00:25:55.891 }, 00:25:55.891 { 00:25:55.891 "name": "BaseBdev4", 00:25:55.891 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:25:55.891 "is_configured": true, 00:25:55.891 "data_offset": 2048, 00:25:55.891 "data_size": 63488 00:25:55.891 } 00:25:55.891 ] 00:25:55.891 }' 00:25:55.891 13:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:55.891 13:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:55.891 13:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:55.891 [2024-07-25 13:26:06.235046] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:25:55.891 13:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:55.891 13:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:56.150 [2024-07-25 13:26:06.624507] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:25:56.408 [2024-07-25 13:26:06.868070] bdev_raid.c: 
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:25:56.667 [2024-07-25 13:26:07.088702] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:25:56.926 13:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:56.926 13:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:56.926 13:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:56.926 13:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:56.926 13:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:56.926 13:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:56.926 13:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.926 13:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.185 13:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:57.185 "name": "raid_bdev1", 00:25:57.185 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:25:57.185 "strip_size_kb": 0, 00:25:57.185 "state": "online", 00:25:57.185 "raid_level": "raid1", 00:25:57.185 "superblock": true, 00:25:57.185 "num_base_bdevs": 4, 00:25:57.185 "num_base_bdevs_discovered": 3, 00:25:57.185 "num_base_bdevs_operational": 3, 00:25:57.185 "process": { 00:25:57.185 "type": "rebuild", 00:25:57.185 "target": "spare", 00:25:57.185 "progress": { 00:25:57.185 "blocks": 53248, 00:25:57.185 "percent": 83 00:25:57.185 } 00:25:57.185 }, 00:25:57.185 "base_bdevs_list": [ 
00:25:57.185 { 00:25:57.185 "name": "spare", 00:25:57.185 "uuid": "1326d8aa-a9d9-589f-b514-5b4058d9331e", 00:25:57.185 "is_configured": true, 00:25:57.185 "data_offset": 2048, 00:25:57.185 "data_size": 63488 00:25:57.185 }, 00:25:57.185 { 00:25:57.185 "name": null, 00:25:57.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.185 "is_configured": false, 00:25:57.185 "data_offset": 2048, 00:25:57.185 "data_size": 63488 00:25:57.185 }, 00:25:57.185 { 00:25:57.185 "name": "BaseBdev3", 00:25:57.185 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:25:57.185 "is_configured": true, 00:25:57.185 "data_offset": 2048, 00:25:57.185 "data_size": 63488 00:25:57.185 }, 00:25:57.185 { 00:25:57.185 "name": "BaseBdev4", 00:25:57.185 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:25:57.185 "is_configured": true, 00:25:57.185 "data_offset": 2048, 00:25:57.185 "data_size": 63488 00:25:57.185 } 00:25:57.185 ] 00:25:57.185 }' 00:25:57.185 13:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:57.185 13:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:57.185 13:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:57.185 13:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:57.185 13:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:57.443 [2024-07-25 13:26:07.753902] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:25:57.700 [2024-07-25 13:26:08.091998] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:57.959 [2024-07-25 13:26:08.192322] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:57.959 [2024-07-25 13:26:08.202049] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:58.217 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:58.217 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:58.217 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:58.217 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:58.217 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:58.217 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:58.217 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.217 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.476 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:58.476 "name": "raid_bdev1", 00:25:58.476 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:25:58.476 "strip_size_kb": 0, 00:25:58.476 "state": "online", 00:25:58.476 "raid_level": "raid1", 00:25:58.476 "superblock": true, 00:25:58.476 "num_base_bdevs": 4, 00:25:58.476 "num_base_bdevs_discovered": 3, 00:25:58.476 "num_base_bdevs_operational": 3, 00:25:58.476 "base_bdevs_list": [ 00:25:58.476 { 00:25:58.476 "name": "spare", 00:25:58.476 "uuid": "1326d8aa-a9d9-589f-b514-5b4058d9331e", 00:25:58.476 "is_configured": true, 00:25:58.476 "data_offset": 2048, 00:25:58.476 "data_size": 63488 00:25:58.476 }, 00:25:58.476 { 00:25:58.476 "name": null, 00:25:58.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.477 "is_configured": false, 00:25:58.477 "data_offset": 2048, 00:25:58.477 "data_size": 63488 
00:25:58.477 }, 00:25:58.477 { 00:25:58.477 "name": "BaseBdev3", 00:25:58.477 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:25:58.477 "is_configured": true, 00:25:58.477 "data_offset": 2048, 00:25:58.477 "data_size": 63488 00:25:58.477 }, 00:25:58.477 { 00:25:58.477 "name": "BaseBdev4", 00:25:58.477 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:25:58.477 "is_configured": true, 00:25:58.477 "data_offset": 2048, 00:25:58.477 "data_size": 63488 00:25:58.477 } 00:25:58.477 ] 00:25:58.477 }' 00:25:58.477 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:58.477 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:58.477 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:58.477 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:58.477 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:25:58.477 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:58.477 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:58.477 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:58.477 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:58.477 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:58.477 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.477 13:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.736 13:26:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:58.736 "name": "raid_bdev1", 00:25:58.737 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:25:58.737 "strip_size_kb": 0, 00:25:58.737 "state": "online", 00:25:58.737 "raid_level": "raid1", 00:25:58.737 "superblock": true, 00:25:58.737 "num_base_bdevs": 4, 00:25:58.737 "num_base_bdevs_discovered": 3, 00:25:58.737 "num_base_bdevs_operational": 3, 00:25:58.737 "base_bdevs_list": [ 00:25:58.737 { 00:25:58.737 "name": "spare", 00:25:58.737 "uuid": "1326d8aa-a9d9-589f-b514-5b4058d9331e", 00:25:58.737 "is_configured": true, 00:25:58.737 "data_offset": 2048, 00:25:58.737 "data_size": 63488 00:25:58.737 }, 00:25:58.737 { 00:25:58.737 "name": null, 00:25:58.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.737 "is_configured": false, 00:25:58.737 "data_offset": 2048, 00:25:58.737 "data_size": 63488 00:25:58.737 }, 00:25:58.737 { 00:25:58.737 "name": "BaseBdev3", 00:25:58.737 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:25:58.737 "is_configured": true, 00:25:58.737 "data_offset": 2048, 00:25:58.737 "data_size": 63488 00:25:58.737 }, 00:25:58.737 { 00:25:58.737 "name": "BaseBdev4", 00:25:58.737 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:25:58.737 "is_configured": true, 00:25:58.737 "data_offset": 2048, 00:25:58.737 "data_size": 63488 00:25:58.737 } 00:25:58.737 ] 00:25:58.737 }' 00:25:58.737 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:58.737 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:58.737 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:58.737 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:58.737 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:25:58.737 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:58.737 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:58.737 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:58.737 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:58.737 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:58.737 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:58.737 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:58.737 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:58.737 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:58.737 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.737 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.997 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:58.997 "name": "raid_bdev1", 00:25:58.997 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:25:58.997 "strip_size_kb": 0, 00:25:58.997 "state": "online", 00:25:58.997 "raid_level": "raid1", 00:25:58.997 "superblock": true, 00:25:58.997 "num_base_bdevs": 4, 00:25:58.997 "num_base_bdevs_discovered": 3, 00:25:58.997 "num_base_bdevs_operational": 3, 00:25:58.997 "base_bdevs_list": [ 00:25:58.997 { 00:25:58.997 "name": "spare", 00:25:58.997 "uuid": "1326d8aa-a9d9-589f-b514-5b4058d9331e", 00:25:58.997 "is_configured": true, 00:25:58.997 "data_offset": 2048, 
00:25:58.997 "data_size": 63488 00:25:58.997 }, 00:25:58.997 { 00:25:58.997 "name": null, 00:25:58.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.997 "is_configured": false, 00:25:58.997 "data_offset": 2048, 00:25:58.997 "data_size": 63488 00:25:58.997 }, 00:25:58.997 { 00:25:58.997 "name": "BaseBdev3", 00:25:58.997 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:25:58.997 "is_configured": true, 00:25:58.997 "data_offset": 2048, 00:25:58.997 "data_size": 63488 00:25:58.997 }, 00:25:58.997 { 00:25:58.997 "name": "BaseBdev4", 00:25:58.997 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:25:58.997 "is_configured": true, 00:25:58.997 "data_offset": 2048, 00:25:58.997 "data_size": 63488 00:25:58.997 } 00:25:58.997 ] 00:25:58.997 }' 00:25:58.997 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:58.997 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:59.566 13:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:59.825 [2024-07-25 13:26:10.130802] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:59.825 [2024-07-25 13:26:10.130833] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:59.825 00:25:59.825 Latency(us) 00:25:59.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.825 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:59.825 raid_bdev1 : 11.00 96.37 289.12 0.00 0.00 14010.54 268.70 119957.09 00:25:59.825 =================================================================================================================== 00:25:59.825 Total : 96.37 289.12 0.00 0.00 14010.54 268.70 119957.09 00:25:59.825 [2024-07-25 13:26:10.182557] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:59.825 [2024-07-25 13:26:10.182583] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:59.825 [2024-07-25 13:26:10.182668] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:59.825 [2024-07-25 13:26:10.182679] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9ffe80 name raid_bdev1, state offline 00:25:59.825 0 00:25:59.825 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.825 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:26:00.084 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:26:00.084 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:26:00.084 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:26:00.084 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:26:00.084 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:00.084 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:26:00.084 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:00.084 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:00.084 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:00.084 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:26:00.084 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:00.084 13:26:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:00.084 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:26:00.343 /dev/nbd0 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:00.343 1+0 records in 00:26:00.343 1+0 records out 00:26:00.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255973 s, 16.0 MB/s 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 
00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # continue 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 
)) 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:00.343 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:26:00.603 /dev/nbd1 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:00.603 1+0 records in 00:26:00.603 1+0 records out 00:26:00.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028903 s, 14.2 MB/s 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # size=4096 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:00.603 13:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:26:00.862 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:00.863 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:00.863 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:26:01.122 /dev/nbd1 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 
00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:01.122 1+0 records in 00:26:01.122 1+0 records out 00:26:01.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162241 s, 25.2 MB/s 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:26:01.122 13:26:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:01.122 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:01.381 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:01.381 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:01.381 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:01.381 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:01.381 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:01.381 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:01.381 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:26:01.381 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 
00:26:01.381 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:01.381 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:01.381 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:01.381 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:01.381 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:26:01.381 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:01.381 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:01.640 13:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:01.640 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:01.640 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:01.640 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:01.640 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:01.640 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:01.640 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:26:01.640 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:26:01.640 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:26:01.640 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete spare 00:26:01.899 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:02.158 [2024-07-25 13:26:12.434152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:02.158 [2024-07-25 13:26:12.434193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.158 [2024-07-25 13:26:12.434213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa030e0 00:26:02.158 [2024-07-25 13:26:12.434225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.158 [2024-07-25 13:26:12.435739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.158 [2024-07-25 13:26:12.435766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:02.158 [2024-07-25 13:26:12.435838] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:02.158 [2024-07-25 13:26:12.435862] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:02.158 [2024-07-25 13:26:12.435961] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:02.158 [2024-07-25 13:26:12.436027] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:02.158 spare 00:26:02.158 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:02.158 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:02.158 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:02.158 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:02.158 13:26:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:02.158 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:02.158 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:02.158 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:02.158 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:02.158 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:02.158 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.158 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.158 [2024-07-25 13:26:12.536340] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xba0d70 00:26:02.158 [2024-07-25 13:26:12.536356] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:02.158 [2024-07-25 13:26:12.536539] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa001f0 00:26:02.158 [2024-07-25 13:26:12.536677] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xba0d70 00:26:02.158 [2024-07-25 13:26:12.536687] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xba0d70 00:26:02.158 [2024-07-25 13:26:12.536785] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:02.418 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:02.418 "name": "raid_bdev1", 00:26:02.418 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:26:02.418 "strip_size_kb": 0, 00:26:02.418 "state": "online", 00:26:02.418 
"raid_level": "raid1", 00:26:02.418 "superblock": true, 00:26:02.418 "num_base_bdevs": 4, 00:26:02.418 "num_base_bdevs_discovered": 3, 00:26:02.418 "num_base_bdevs_operational": 3, 00:26:02.418 "base_bdevs_list": [ 00:26:02.418 { 00:26:02.418 "name": "spare", 00:26:02.418 "uuid": "1326d8aa-a9d9-589f-b514-5b4058d9331e", 00:26:02.418 "is_configured": true, 00:26:02.418 "data_offset": 2048, 00:26:02.418 "data_size": 63488 00:26:02.418 }, 00:26:02.418 { 00:26:02.418 "name": null, 00:26:02.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.418 "is_configured": false, 00:26:02.418 "data_offset": 2048, 00:26:02.418 "data_size": 63488 00:26:02.418 }, 00:26:02.418 { 00:26:02.418 "name": "BaseBdev3", 00:26:02.418 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:26:02.418 "is_configured": true, 00:26:02.418 "data_offset": 2048, 00:26:02.418 "data_size": 63488 00:26:02.418 }, 00:26:02.418 { 00:26:02.418 "name": "BaseBdev4", 00:26:02.418 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:26:02.418 "is_configured": true, 00:26:02.418 "data_offset": 2048, 00:26:02.418 "data_size": 63488 00:26:02.418 } 00:26:02.418 ] 00:26:02.418 }' 00:26:02.418 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:02.418 13:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:03.383 13:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:03.383 13:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:03.384 13:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:03.384 13:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:03.384 13:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:03.384 13:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.384 13:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.384 13:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:03.384 "name": "raid_bdev1", 00:26:03.384 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:26:03.384 "strip_size_kb": 0, 00:26:03.384 "state": "online", 00:26:03.384 "raid_level": "raid1", 00:26:03.384 "superblock": true, 00:26:03.384 "num_base_bdevs": 4, 00:26:03.384 "num_base_bdevs_discovered": 3, 00:26:03.384 "num_base_bdevs_operational": 3, 00:26:03.384 "base_bdevs_list": [ 00:26:03.384 { 00:26:03.384 "name": "spare", 00:26:03.384 "uuid": "1326d8aa-a9d9-589f-b514-5b4058d9331e", 00:26:03.384 "is_configured": true, 00:26:03.384 "data_offset": 2048, 00:26:03.384 "data_size": 63488 00:26:03.384 }, 00:26:03.384 { 00:26:03.384 "name": null, 00:26:03.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.384 "is_configured": false, 00:26:03.384 "data_offset": 2048, 00:26:03.384 "data_size": 63488 00:26:03.384 }, 00:26:03.384 { 00:26:03.384 "name": "BaseBdev3", 00:26:03.384 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:26:03.384 "is_configured": true, 00:26:03.384 "data_offset": 2048, 00:26:03.384 "data_size": 63488 00:26:03.384 }, 00:26:03.384 { 00:26:03.384 "name": "BaseBdev4", 00:26:03.384 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:26:03.384 "is_configured": true, 00:26:03.384 "data_offset": 2048, 00:26:03.384 "data_size": 63488 00:26:03.384 } 00:26:03.384 ] 00:26:03.384 }' 00:26:03.384 13:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:03.384 13:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:03.384 13:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:26:03.384 13:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:03.384 13:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.384 13:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:03.643 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:26:03.643 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:03.901 [2024-07-25 13:26:14.275351] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:03.901 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:03.901 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:03.901 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:03.901 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:03.901 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:03.901 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:03.901 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:03.901 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:03.901 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:03.901 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 
-- # local tmp 00:26:03.901 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.901 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.468 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:04.468 "name": "raid_bdev1", 00:26:04.468 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:26:04.468 "strip_size_kb": 0, 00:26:04.468 "state": "online", 00:26:04.468 "raid_level": "raid1", 00:26:04.468 "superblock": true, 00:26:04.468 "num_base_bdevs": 4, 00:26:04.468 "num_base_bdevs_discovered": 2, 00:26:04.468 "num_base_bdevs_operational": 2, 00:26:04.468 "base_bdevs_list": [ 00:26:04.468 { 00:26:04.468 "name": null, 00:26:04.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.468 "is_configured": false, 00:26:04.468 "data_offset": 2048, 00:26:04.468 "data_size": 63488 00:26:04.468 }, 00:26:04.468 { 00:26:04.468 "name": null, 00:26:04.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.468 "is_configured": false, 00:26:04.468 "data_offset": 2048, 00:26:04.468 "data_size": 63488 00:26:04.468 }, 00:26:04.468 { 00:26:04.468 "name": "BaseBdev3", 00:26:04.468 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:26:04.468 "is_configured": true, 00:26:04.468 "data_offset": 2048, 00:26:04.468 "data_size": 63488 00:26:04.468 }, 00:26:04.468 { 00:26:04.468 "name": "BaseBdev4", 00:26:04.468 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:26:04.468 "is_configured": true, 00:26:04.468 "data_offset": 2048, 00:26:04.468 "data_size": 63488 00:26:04.468 } 00:26:04.468 ] 00:26:04.468 }' 00:26:04.468 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:04.468 13:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:05.404 13:26:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:05.404 [2024-07-25 13:26:15.827667] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:05.404 [2024-07-25 13:26:15.827806] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:26:05.404 [2024-07-25 13:26:15.827822] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:05.404 [2024-07-25 13:26:15.827848] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:05.404 [2024-07-25 13:26:15.832045] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa00230 00:26:05.404 [2024-07-25 13:26:15.834081] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:05.404 13:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:26:06.781 13:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:06.781 13:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:06.781 13:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:06.781 13:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:06.781 13:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:06.781 13:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.781 13:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:26:06.781 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:06.781 "name": "raid_bdev1", 00:26:06.781 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:26:06.781 "strip_size_kb": 0, 00:26:06.781 "state": "online", 00:26:06.781 "raid_level": "raid1", 00:26:06.781 "superblock": true, 00:26:06.781 "num_base_bdevs": 4, 00:26:06.781 "num_base_bdevs_discovered": 3, 00:26:06.781 "num_base_bdevs_operational": 3, 00:26:06.781 "process": { 00:26:06.781 "type": "rebuild", 00:26:06.781 "target": "spare", 00:26:06.781 "progress": { 00:26:06.781 "blocks": 24576, 00:26:06.781 "percent": 38 00:26:06.781 } 00:26:06.781 }, 00:26:06.781 "base_bdevs_list": [ 00:26:06.781 { 00:26:06.781 "name": "spare", 00:26:06.781 "uuid": "1326d8aa-a9d9-589f-b514-5b4058d9331e", 00:26:06.781 "is_configured": true, 00:26:06.781 "data_offset": 2048, 00:26:06.781 "data_size": 63488 00:26:06.781 }, 00:26:06.781 { 00:26:06.781 "name": null, 00:26:06.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.781 "is_configured": false, 00:26:06.781 "data_offset": 2048, 00:26:06.781 "data_size": 63488 00:26:06.781 }, 00:26:06.781 { 00:26:06.781 "name": "BaseBdev3", 00:26:06.781 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:26:06.781 "is_configured": true, 00:26:06.781 "data_offset": 2048, 00:26:06.781 "data_size": 63488 00:26:06.781 }, 00:26:06.781 { 00:26:06.781 "name": "BaseBdev4", 00:26:06.781 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:26:06.781 "is_configured": true, 00:26:06.781 "data_offset": 2048, 00:26:06.781 "data_size": 63488 00:26:06.781 } 00:26:06.781 ] 00:26:06.781 }' 00:26:06.781 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:06.781 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:06.781 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:26:06.781 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:06.781 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:07.040 [2024-07-25 13:26:17.366719] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:07.040 [2024-07-25 13:26:17.445895] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:07.040 [2024-07-25 13:26:17.445942] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:07.040 [2024-07-25 13:26:17.445958] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:07.040 [2024-07-25 13:26:17.445965] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:07.040 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:07.040 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:07.040 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:07.040 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:07.040 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:07.040 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:07.040 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:07.040 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:07.040 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:26:07.040 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:07.041 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.041 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.299 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:07.299 "name": "raid_bdev1", 00:26:07.299 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:26:07.299 "strip_size_kb": 0, 00:26:07.300 "state": "online", 00:26:07.300 "raid_level": "raid1", 00:26:07.300 "superblock": true, 00:26:07.300 "num_base_bdevs": 4, 00:26:07.300 "num_base_bdevs_discovered": 2, 00:26:07.300 "num_base_bdevs_operational": 2, 00:26:07.300 "base_bdevs_list": [ 00:26:07.300 { 00:26:07.300 "name": null, 00:26:07.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.300 "is_configured": false, 00:26:07.300 "data_offset": 2048, 00:26:07.300 "data_size": 63488 00:26:07.300 }, 00:26:07.300 { 00:26:07.300 "name": null, 00:26:07.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.300 "is_configured": false, 00:26:07.300 "data_offset": 2048, 00:26:07.300 "data_size": 63488 00:26:07.300 }, 00:26:07.300 { 00:26:07.300 "name": "BaseBdev3", 00:26:07.300 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:26:07.300 "is_configured": true, 00:26:07.300 "data_offset": 2048, 00:26:07.300 "data_size": 63488 00:26:07.300 }, 00:26:07.300 { 00:26:07.300 "name": "BaseBdev4", 00:26:07.300 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:26:07.300 "is_configured": true, 00:26:07.300 "data_offset": 2048, 00:26:07.300 "data_size": 63488 00:26:07.300 } 00:26:07.300 ] 00:26:07.300 }' 00:26:07.300 13:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:07.300 13:26:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:07.867 13:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:08.126 [2024-07-25 13:26:18.440702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:08.126 [2024-07-25 13:26:18.440745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.126 [2024-07-25 13:26:18.440766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa04a20 00:26:08.126 [2024-07-25 13:26:18.440777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.126 [2024-07-25 13:26:18.441120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.126 [2024-07-25 13:26:18.441145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:08.126 [2024-07-25 13:26:18.441218] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:08.126 [2024-07-25 13:26:18.441229] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:26:08.126 [2024-07-25 13:26:18.441239] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:26:08.126 [2024-07-25 13:26:18.441255] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:08.126 [2024-07-25 13:26:18.445468] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa00230 00:26:08.126 spare 00:26:08.126 [2024-07-25 13:26:18.446833] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:08.126 13:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:26:09.062 13:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:09.062 13:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:09.062 13:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:09.062 13:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:09.062 13:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:09.062 13:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.062 13:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.321 13:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:09.321 "name": "raid_bdev1", 00:26:09.321 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:26:09.321 "strip_size_kb": 0, 00:26:09.321 "state": "online", 00:26:09.321 "raid_level": "raid1", 00:26:09.321 "superblock": true, 00:26:09.321 "num_base_bdevs": 4, 00:26:09.321 "num_base_bdevs_discovered": 3, 00:26:09.321 "num_base_bdevs_operational": 3, 00:26:09.321 "process": { 00:26:09.321 "type": "rebuild", 00:26:09.321 "target": "spare", 00:26:09.321 "progress": { 00:26:09.321 
"blocks": 24576, 00:26:09.321 "percent": 38 00:26:09.321 } 00:26:09.321 }, 00:26:09.321 "base_bdevs_list": [ 00:26:09.321 { 00:26:09.321 "name": "spare", 00:26:09.321 "uuid": "1326d8aa-a9d9-589f-b514-5b4058d9331e", 00:26:09.321 "is_configured": true, 00:26:09.321 "data_offset": 2048, 00:26:09.321 "data_size": 63488 00:26:09.321 }, 00:26:09.321 { 00:26:09.321 "name": null, 00:26:09.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.321 "is_configured": false, 00:26:09.321 "data_offset": 2048, 00:26:09.321 "data_size": 63488 00:26:09.321 }, 00:26:09.321 { 00:26:09.321 "name": "BaseBdev3", 00:26:09.321 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:26:09.321 "is_configured": true, 00:26:09.321 "data_offset": 2048, 00:26:09.321 "data_size": 63488 00:26:09.321 }, 00:26:09.321 { 00:26:09.321 "name": "BaseBdev4", 00:26:09.321 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:26:09.321 "is_configured": true, 00:26:09.321 "data_offset": 2048, 00:26:09.321 "data_size": 63488 00:26:09.321 } 00:26:09.321 ] 00:26:09.321 }' 00:26:09.321 13:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:09.321 13:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:09.321 13:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:09.321 13:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:09.321 13:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:09.580 [2024-07-25 13:26:20.014555] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:09.580 [2024-07-25 13:26:20.058623] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:09.580 [2024-07-25 
13:26:20.058667] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:09.580 [2024-07-25 13:26:20.058681] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:09.580 [2024-07-25 13:26:20.058689] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:09.839 13:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:09.839 13:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:09.839 13:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:09.839 13:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:09.839 13:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:09.839 13:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:09.839 13:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:09.839 13:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:09.839 13:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:09.839 13:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:09.839 13:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.839 13:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.839 13:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:09.839 "name": "raid_bdev1", 00:26:09.839 "uuid": 
"a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:26:09.839 "strip_size_kb": 0, 00:26:09.839 "state": "online", 00:26:09.839 "raid_level": "raid1", 00:26:09.839 "superblock": true, 00:26:09.839 "num_base_bdevs": 4, 00:26:09.839 "num_base_bdevs_discovered": 2, 00:26:09.839 "num_base_bdevs_operational": 2, 00:26:09.839 "base_bdevs_list": [ 00:26:09.839 { 00:26:09.839 "name": null, 00:26:09.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.839 "is_configured": false, 00:26:09.839 "data_offset": 2048, 00:26:09.839 "data_size": 63488 00:26:09.839 }, 00:26:09.839 { 00:26:09.839 "name": null, 00:26:09.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.839 "is_configured": false, 00:26:09.839 "data_offset": 2048, 00:26:09.839 "data_size": 63488 00:26:09.839 }, 00:26:09.839 { 00:26:09.839 "name": "BaseBdev3", 00:26:09.839 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:26:09.839 "is_configured": true, 00:26:09.839 "data_offset": 2048, 00:26:09.839 "data_size": 63488 00:26:09.839 }, 00:26:09.839 { 00:26:09.839 "name": "BaseBdev4", 00:26:09.839 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:26:09.839 "is_configured": true, 00:26:09.839 "data_offset": 2048, 00:26:09.839 "data_size": 63488 00:26:09.839 } 00:26:09.839 ] 00:26:09.839 }' 00:26:09.839 13:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:09.839 13:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:10.775 13:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:10.775 13:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:10.775 13:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:10.775 13:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:10.775 13:26:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:10.775 13:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.775 13:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.034 13:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:11.034 "name": "raid_bdev1", 00:26:11.034 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:26:11.034 "strip_size_kb": 0, 00:26:11.034 "state": "online", 00:26:11.034 "raid_level": "raid1", 00:26:11.034 "superblock": true, 00:26:11.034 "num_base_bdevs": 4, 00:26:11.034 "num_base_bdevs_discovered": 2, 00:26:11.034 "num_base_bdevs_operational": 2, 00:26:11.034 "base_bdevs_list": [ 00:26:11.034 { 00:26:11.034 "name": null, 00:26:11.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.034 "is_configured": false, 00:26:11.034 "data_offset": 2048, 00:26:11.034 "data_size": 63488 00:26:11.034 }, 00:26:11.034 { 00:26:11.034 "name": null, 00:26:11.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.034 "is_configured": false, 00:26:11.034 "data_offset": 2048, 00:26:11.034 "data_size": 63488 00:26:11.034 }, 00:26:11.034 { 00:26:11.034 "name": "BaseBdev3", 00:26:11.034 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:26:11.034 "is_configured": true, 00:26:11.034 "data_offset": 2048, 00:26:11.034 "data_size": 63488 00:26:11.034 }, 00:26:11.034 { 00:26:11.034 "name": "BaseBdev4", 00:26:11.034 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:26:11.034 "is_configured": true, 00:26:11.034 "data_offset": 2048, 00:26:11.034 "data_size": 63488 00:26:11.034 } 00:26:11.034 ] 00:26:11.034 }' 00:26:11.034 13:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:11.034 13:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 
-- # [[ none == \n\o\n\e ]] 00:26:11.034 13:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:11.034 13:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:11.034 13:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:11.293 13:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:11.551 [2024-07-25 13:26:21.867768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:11.551 [2024-07-25 13:26:21.867812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.551 [2024-07-25 13:26:21.867830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa050f0 00:26:11.551 [2024-07-25 13:26:21.867841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.551 [2024-07-25 13:26:21.868163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.551 [2024-07-25 13:26:21.868179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:11.551 [2024-07-25 13:26:21.868238] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:11.551 [2024-07-25 13:26:21.868249] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:26:11.551 [2024-07-25 13:26:21.868258] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:11.551 BaseBdev1 00:26:11.551 13:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 
1 00:26:12.488 13:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:12.488 13:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:12.488 13:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:12.488 13:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:12.488 13:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:12.488 13:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:12.488 13:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:12.488 13:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:12.488 13:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:12.488 13:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:12.488 13:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.488 13:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:12.747 13:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:12.747 "name": "raid_bdev1", 00:26:12.747 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:26:12.747 "strip_size_kb": 0, 00:26:12.747 "state": "online", 00:26:12.747 "raid_level": "raid1", 00:26:12.747 "superblock": true, 00:26:12.747 "num_base_bdevs": 4, 00:26:12.747 "num_base_bdevs_discovered": 2, 00:26:12.747 "num_base_bdevs_operational": 2, 00:26:12.747 "base_bdevs_list": [ 00:26:12.747 { 00:26:12.747 "name": null, 
00:26:12.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.747 "is_configured": false, 00:26:12.747 "data_offset": 2048, 00:26:12.747 "data_size": 63488 00:26:12.747 }, 00:26:12.747 { 00:26:12.747 "name": null, 00:26:12.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.747 "is_configured": false, 00:26:12.747 "data_offset": 2048, 00:26:12.747 "data_size": 63488 00:26:12.747 }, 00:26:12.747 { 00:26:12.747 "name": "BaseBdev3", 00:26:12.747 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:26:12.747 "is_configured": true, 00:26:12.747 "data_offset": 2048, 00:26:12.747 "data_size": 63488 00:26:12.747 }, 00:26:12.747 { 00:26:12.747 "name": "BaseBdev4", 00:26:12.747 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:26:12.747 "is_configured": true, 00:26:12.747 "data_offset": 2048, 00:26:12.747 "data_size": 63488 00:26:12.747 } 00:26:12.747 ] 00:26:12.747 }' 00:26:12.747 13:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:12.747 13:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:13.314 13:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:13.314 13:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:13.314 13:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:13.314 13:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:13.314 13:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:13.314 13:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.314 13:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:26:13.573 13:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:13.573 "name": "raid_bdev1", 00:26:13.573 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:26:13.573 "strip_size_kb": 0, 00:26:13.573 "state": "online", 00:26:13.573 "raid_level": "raid1", 00:26:13.573 "superblock": true, 00:26:13.573 "num_base_bdevs": 4, 00:26:13.573 "num_base_bdevs_discovered": 2, 00:26:13.573 "num_base_bdevs_operational": 2, 00:26:13.573 "base_bdevs_list": [ 00:26:13.573 { 00:26:13.573 "name": null, 00:26:13.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.573 "is_configured": false, 00:26:13.573 "data_offset": 2048, 00:26:13.573 "data_size": 63488 00:26:13.573 }, 00:26:13.573 { 00:26:13.573 "name": null, 00:26:13.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.573 "is_configured": false, 00:26:13.573 "data_offset": 2048, 00:26:13.573 "data_size": 63488 00:26:13.573 }, 00:26:13.573 { 00:26:13.573 "name": "BaseBdev3", 00:26:13.573 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:26:13.573 "is_configured": true, 00:26:13.573 "data_offset": 2048, 00:26:13.573 "data_size": 63488 00:26:13.573 }, 00:26:13.573 { 00:26:13.573 "name": "BaseBdev4", 00:26:13.573 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:26:13.573 "is_configured": true, 00:26:13.573 "data_offset": 2048, 00:26:13.573 "data_size": 63488 00:26:13.573 } 00:26:13.573 ] 00:26:13.573 }' 00:26:13.573 13:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:13.573 13:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:13.573 13:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:13.573 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:13.573 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:13.573 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:26:13.573 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:13.573 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:13.574 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:13.574 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:13.574 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:13.574 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:13.574 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:13.574 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:13.574 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:13.574 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:13.832 [2024-07-25 13:26:24.246313] bdev_raid.c:3312:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:26:13.832 [2024-07-25 13:26:24.246430] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:26:13.832 [2024-07-25 13:26:24.246445] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:13.832 request: 00:26:13.832 { 00:26:13.832 "base_bdev": "BaseBdev1", 00:26:13.832 "raid_bdev": "raid_bdev1", 00:26:13.832 "method": "bdev_raid_add_base_bdev", 00:26:13.832 "req_id": 1 00:26:13.832 } 00:26:13.832 Got JSON-RPC error response 00:26:13.832 response: 00:26:13.832 { 00:26:13.832 "code": -22, 00:26:13.832 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:13.832 } 00:26:13.832 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:26:13.832 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:13.832 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:13.832 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:13.832 13:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:26:15.207 13:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:15.207 13:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:15.208 13:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:15.208 13:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:15.208 13:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:15.208 13:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
00:26:15.208 13:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:15.208 13:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:15.208 13:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:15.208 13:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:15.208 13:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.208 13:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.208 13:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:15.208 "name": "raid_bdev1", 00:26:15.208 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:26:15.208 "strip_size_kb": 0, 00:26:15.208 "state": "online", 00:26:15.208 "raid_level": "raid1", 00:26:15.208 "superblock": true, 00:26:15.208 "num_base_bdevs": 4, 00:26:15.208 "num_base_bdevs_discovered": 2, 00:26:15.208 "num_base_bdevs_operational": 2, 00:26:15.208 "base_bdevs_list": [ 00:26:15.208 { 00:26:15.208 "name": null, 00:26:15.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.208 "is_configured": false, 00:26:15.208 "data_offset": 2048, 00:26:15.208 "data_size": 63488 00:26:15.208 }, 00:26:15.208 { 00:26:15.208 "name": null, 00:26:15.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.208 "is_configured": false, 00:26:15.208 "data_offset": 2048, 00:26:15.208 "data_size": 63488 00:26:15.208 }, 00:26:15.208 { 00:26:15.208 "name": "BaseBdev3", 00:26:15.208 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:26:15.208 "is_configured": true, 00:26:15.208 "data_offset": 2048, 00:26:15.208 "data_size": 63488 00:26:15.208 }, 00:26:15.208 { 00:26:15.208 "name": "BaseBdev4", 00:26:15.208 "uuid": 
"dd584666-97ee-59a5-9a96-9b5cace71961", 00:26:15.208 "is_configured": true, 00:26:15.208 "data_offset": 2048, 00:26:15.208 "data_size": 63488 00:26:15.208 } 00:26:15.208 ] 00:26:15.208 }' 00:26:15.208 13:26:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:15.208 13:26:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:15.774 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:15.774 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:15.774 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:15.774 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:15.774 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:15.774 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.774 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.032 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:16.032 "name": "raid_bdev1", 00:26:16.032 "uuid": "a5fbc4f2-18dd-4534-96a0-5cbbcf7849ae", 00:26:16.032 "strip_size_kb": 0, 00:26:16.032 "state": "online", 00:26:16.032 "raid_level": "raid1", 00:26:16.032 "superblock": true, 00:26:16.032 "num_base_bdevs": 4, 00:26:16.032 "num_base_bdevs_discovered": 2, 00:26:16.032 "num_base_bdevs_operational": 2, 00:26:16.032 "base_bdevs_list": [ 00:26:16.032 { 00:26:16.032 "name": null, 00:26:16.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.032 "is_configured": false, 00:26:16.032 "data_offset": 2048, 00:26:16.032 "data_size": 63488 
00:26:16.032 }, 00:26:16.032 { 00:26:16.032 "name": null, 00:26:16.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.032 "is_configured": false, 00:26:16.032 "data_offset": 2048, 00:26:16.032 "data_size": 63488 00:26:16.032 }, 00:26:16.032 { 00:26:16.032 "name": "BaseBdev3", 00:26:16.032 "uuid": "bb979809-61c3-5696-bf1e-abc07aecc0c5", 00:26:16.032 "is_configured": true, 00:26:16.033 "data_offset": 2048, 00:26:16.033 "data_size": 63488 00:26:16.033 }, 00:26:16.033 { 00:26:16.033 "name": "BaseBdev4", 00:26:16.033 "uuid": "dd584666-97ee-59a5-9a96-9b5cace71961", 00:26:16.033 "is_configured": true, 00:26:16.033 "data_offset": 2048, 00:26:16.033 "data_size": 63488 00:26:16.033 } 00:26:16.033 ] 00:26:16.033 }' 00:26:16.033 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:16.033 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:16.033 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:16.033 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:16.033 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 988292 00:26:16.033 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 988292 ']' 00:26:16.033 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 988292 00:26:16.033 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:26:16.033 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:16.033 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 988292 00:26:16.033 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:16.033 13:26:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:16.033 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 988292' 00:26:16.033 killing process with pid 988292 00:26:16.033 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 988292 00:26:16.033 Received shutdown signal, test time was about 27.193198 seconds 00:26:16.033 00:26:16.033 Latency(us) 00:26:16.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.033 =================================================================================================================== 00:26:16.033 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:16.033 [2024-07-25 13:26:26.412632] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:16.033 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 988292 00:26:16.033 [2024-07-25 13:26:26.412717] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:16.033 [2024-07-25 13:26:26.412770] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:16.033 [2024-07-25 13:26:26.412781] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xba0d70 name raid_bdev1, state offline 00:26:16.033 [2024-07-25 13:26:26.447184] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:16.292 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:26:16.292 00:26:16.292 real 0m32.549s 00:26:16.292 user 0m51.351s 00:26:16.292 sys 0m5.010s 00:26:16.292 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:16.292 13:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:16.292 ************************************ 00:26:16.292 END TEST raid_rebuild_test_sb_io 
00:26:16.292 ************************************ 00:26:16.292 13:26:26 bdev_raid -- bdev/bdev_raid.sh@964 -- # '[' n == y ']' 00:26:16.292 13:26:26 bdev_raid -- bdev/bdev_raid.sh@976 -- # base_blocklen=4096 00:26:16.292 13:26:26 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:26:16.292 13:26:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:16.292 13:26:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:16.292 13:26:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:16.292 ************************************ 00:26:16.292 START TEST raid_state_function_test_sb_4k 00:26:16.292 ************************************ 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo 
BaseBdev2 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=994309 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 994309' 00:26:16.292 Process raid pid: 994309 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 994309 /var/tmp/spdk-raid.sock 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 994309 ']' 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:16.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:16.292 13:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:16.292 [2024-07-25 13:26:26.779061] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:26:16.292 [2024-07-25 13:26:26.779115] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:16.551 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.551 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.552 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.552 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.552 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.552 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:16.552 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.552 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.552 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.552 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.552 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.552 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:16.552 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:16.552 [2024-07-25 13:26:26.910127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.552 [2024-07-25 13:26:26.996907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.811 [2024-07-25 13:26:27.063346] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:16.811 [2024-07-25 13:26:27.063380] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:17.378 13:26:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:17.378 13:26:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:26:17.378 13:26:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:17.378 [2024-07-25 13:26:27.859314] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:17.378 [2024-07-25 13:26:27.859351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:26:17.378 [2024-07-25 13:26:27.859362] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:17.378 [2024-07-25 13:26:27.859372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:17.637 13:26:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:17.637 13:26:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:17.637 13:26:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:17.637 13:26:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:17.637 13:26:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:17.637 13:26:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:17.637 13:26:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:17.637 13:26:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:17.637 13:26:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:17.637 13:26:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:17.637 13:26:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.637 13:26:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:17.637 13:26:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:17.637 "name": "Existed_Raid", 
00:26:17.637 "uuid": "fe901741-29e9-4ac1-a069-92ed0d574ab9", 00:26:17.637 "strip_size_kb": 0, 00:26:17.637 "state": "configuring", 00:26:17.637 "raid_level": "raid1", 00:26:17.637 "superblock": true, 00:26:17.637 "num_base_bdevs": 2, 00:26:17.637 "num_base_bdevs_discovered": 0, 00:26:17.637 "num_base_bdevs_operational": 2, 00:26:17.637 "base_bdevs_list": [ 00:26:17.637 { 00:26:17.637 "name": "BaseBdev1", 00:26:17.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.637 "is_configured": false, 00:26:17.637 "data_offset": 0, 00:26:17.637 "data_size": 0 00:26:17.637 }, 00:26:17.637 { 00:26:17.637 "name": "BaseBdev2", 00:26:17.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.637 "is_configured": false, 00:26:17.637 "data_offset": 0, 00:26:17.637 "data_size": 0 00:26:17.637 } 00:26:17.637 ] 00:26:17.637 }' 00:26:17.637 13:26:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:17.637 13:26:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:18.204 13:26:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:18.802 [2024-07-25 13:26:29.110470] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:18.802 [2024-07-25 13:26:29.110499] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25f0f20 name Existed_Raid, state configuring 00:26:18.802 13:26:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:19.091 [2024-07-25 13:26:29.339080] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:19.091 [2024-07-25 13:26:29.339108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:19.091 [2024-07-25 13:26:29.339117] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:19.092 [2024-07-25 13:26:29.339127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:19.092 13:26:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:26:19.659 [2024-07-25 13:26:29.841899] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:19.659 BaseBdev1 00:26:19.659 13:26:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:19.659 13:26:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:19.659 13:26:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:19.659 13:26:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:26:19.659 13:26:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:19.659 13:26:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:19.659 13:26:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:19.659 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:19.917 [ 00:26:19.918 { 00:26:19.918 "name": "BaseBdev1", 00:26:19.918 "aliases": [ 00:26:19.918 "ae9eb81c-5e5a-4f39-8ef5-00c118ab77b0" 00:26:19.918 ], 00:26:19.918 
"product_name": "Malloc disk", 00:26:19.918 "block_size": 4096, 00:26:19.918 "num_blocks": 8192, 00:26:19.918 "uuid": "ae9eb81c-5e5a-4f39-8ef5-00c118ab77b0", 00:26:19.918 "assigned_rate_limits": { 00:26:19.918 "rw_ios_per_sec": 0, 00:26:19.918 "rw_mbytes_per_sec": 0, 00:26:19.918 "r_mbytes_per_sec": 0, 00:26:19.918 "w_mbytes_per_sec": 0 00:26:19.918 }, 00:26:19.918 "claimed": true, 00:26:19.918 "claim_type": "exclusive_write", 00:26:19.918 "zoned": false, 00:26:19.918 "supported_io_types": { 00:26:19.918 "read": true, 00:26:19.918 "write": true, 00:26:19.918 "unmap": true, 00:26:19.918 "flush": true, 00:26:19.918 "reset": true, 00:26:19.918 "nvme_admin": false, 00:26:19.918 "nvme_io": false, 00:26:19.918 "nvme_io_md": false, 00:26:19.918 "write_zeroes": true, 00:26:19.918 "zcopy": true, 00:26:19.918 "get_zone_info": false, 00:26:19.918 "zone_management": false, 00:26:19.918 "zone_append": false, 00:26:19.918 "compare": false, 00:26:19.918 "compare_and_write": false, 00:26:19.918 "abort": true, 00:26:19.918 "seek_hole": false, 00:26:19.918 "seek_data": false, 00:26:19.918 "copy": true, 00:26:19.918 "nvme_iov_md": false 00:26:19.918 }, 00:26:19.918 "memory_domains": [ 00:26:19.918 { 00:26:19.918 "dma_device_id": "system", 00:26:19.918 "dma_device_type": 1 00:26:19.918 }, 00:26:19.918 { 00:26:19.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:19.918 "dma_device_type": 2 00:26:19.918 } 00:26:19.918 ], 00:26:19.918 "driver_specific": {} 00:26:19.918 } 00:26:19.918 ] 00:26:19.918 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:26:19.918 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:19.918 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:19.918 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:26:19.918 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:19.918 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:19.918 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:19.918 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:19.918 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:19.918 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:19.918 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:19.918 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.918 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:20.176 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:20.176 "name": "Existed_Raid", 00:26:20.176 "uuid": "ffb2afb7-1cf9-4588-a18a-3c7bb00b4536", 00:26:20.176 "strip_size_kb": 0, 00:26:20.176 "state": "configuring", 00:26:20.176 "raid_level": "raid1", 00:26:20.176 "superblock": true, 00:26:20.176 "num_base_bdevs": 2, 00:26:20.176 "num_base_bdevs_discovered": 1, 00:26:20.176 "num_base_bdevs_operational": 2, 00:26:20.176 "base_bdevs_list": [ 00:26:20.176 { 00:26:20.176 "name": "BaseBdev1", 00:26:20.176 "uuid": "ae9eb81c-5e5a-4f39-8ef5-00c118ab77b0", 00:26:20.176 "is_configured": true, 00:26:20.176 "data_offset": 256, 00:26:20.176 "data_size": 7936 00:26:20.176 }, 00:26:20.176 { 00:26:20.176 "name": "BaseBdev2", 00:26:20.176 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:20.176 "is_configured": false, 00:26:20.176 "data_offset": 0, 00:26:20.176 "data_size": 0 00:26:20.177 } 00:26:20.177 ] 00:26:20.177 }' 00:26:20.177 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:20.177 13:26:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:20.747 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:21.005 [2024-07-25 13:26:31.289702] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:21.005 [2024-07-25 13:26:31.289736] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25f0810 name Existed_Raid, state configuring 00:26:21.005 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:21.264 [2024-07-25 13:26:31.506299] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:21.264 [2024-07-25 13:26:31.507673] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:21.264 [2024-07-25 13:26:31.507703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:21.264 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:21.264 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:21.264 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:21.264 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:26:21.264 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:21.264 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:21.264 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:21.264 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:21.264 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:21.264 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:21.264 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:21.264 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:21.264 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.264 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:21.523 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:21.523 "name": "Existed_Raid", 00:26:21.523 "uuid": "15f42a81-519a-48e1-83c1-6b038ea325e8", 00:26:21.523 "strip_size_kb": 0, 00:26:21.523 "state": "configuring", 00:26:21.523 "raid_level": "raid1", 00:26:21.523 "superblock": true, 00:26:21.523 "num_base_bdevs": 2, 00:26:21.523 "num_base_bdevs_discovered": 1, 00:26:21.523 "num_base_bdevs_operational": 2, 00:26:21.523 "base_bdevs_list": [ 00:26:21.523 { 00:26:21.523 "name": "BaseBdev1", 00:26:21.523 "uuid": "ae9eb81c-5e5a-4f39-8ef5-00c118ab77b0", 00:26:21.523 "is_configured": true, 00:26:21.523 "data_offset": 256, 
00:26:21.523 "data_size": 7936 00:26:21.523 }, 00:26:21.523 { 00:26:21.523 "name": "BaseBdev2", 00:26:21.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.523 "is_configured": false, 00:26:21.523 "data_offset": 0, 00:26:21.523 "data_size": 0 00:26:21.523 } 00:26:21.523 ] 00:26:21.523 }' 00:26:21.523 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:21.523 13:26:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:22.091 13:26:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:26:22.091 [2024-07-25 13:26:32.532179] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:22.091 [2024-07-25 13:26:32.532313] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x25f1610 00:26:22.091 [2024-07-25 13:26:32.532325] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:22.091 [2024-07-25 13:26:32.532484] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25dd690 00:26:22.091 [2024-07-25 13:26:32.532595] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25f1610 00:26:22.091 [2024-07-25 13:26:32.532605] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x25f1610 00:26:22.091 [2024-07-25 13:26:32.532687] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:22.091 BaseBdev2 00:26:22.091 13:26:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:22.091 13:26:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:26:22.091 13:26:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:26:22.091 13:26:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:26:22.091 13:26:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:22.091 13:26:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:22.091 13:26:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:22.350 13:26:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:22.608 [ 00:26:22.608 { 00:26:22.608 "name": "BaseBdev2", 00:26:22.608 "aliases": [ 00:26:22.608 "d0d571af-5eec-4de4-aa6a-8a05195725a7" 00:26:22.608 ], 00:26:22.608 "product_name": "Malloc disk", 00:26:22.608 "block_size": 4096, 00:26:22.608 "num_blocks": 8192, 00:26:22.608 "uuid": "d0d571af-5eec-4de4-aa6a-8a05195725a7", 00:26:22.608 "assigned_rate_limits": { 00:26:22.608 "rw_ios_per_sec": 0, 00:26:22.608 "rw_mbytes_per_sec": 0, 00:26:22.608 "r_mbytes_per_sec": 0, 00:26:22.608 "w_mbytes_per_sec": 0 00:26:22.608 }, 00:26:22.608 "claimed": true, 00:26:22.608 "claim_type": "exclusive_write", 00:26:22.608 "zoned": false, 00:26:22.608 "supported_io_types": { 00:26:22.608 "read": true, 00:26:22.608 "write": true, 00:26:22.608 "unmap": true, 00:26:22.608 "flush": true, 00:26:22.608 "reset": true, 00:26:22.608 "nvme_admin": false, 00:26:22.608 "nvme_io": false, 00:26:22.608 "nvme_io_md": false, 00:26:22.608 "write_zeroes": true, 00:26:22.608 "zcopy": true, 00:26:22.608 "get_zone_info": false, 00:26:22.608 "zone_management": false, 00:26:22.608 "zone_append": false, 00:26:22.608 "compare": false, 00:26:22.608 "compare_and_write": false, 00:26:22.608 "abort": true, 00:26:22.608 
"seek_hole": false, 00:26:22.608 "seek_data": false, 00:26:22.608 "copy": true, 00:26:22.608 "nvme_iov_md": false 00:26:22.608 }, 00:26:22.608 "memory_domains": [ 00:26:22.608 { 00:26:22.608 "dma_device_id": "system", 00:26:22.608 "dma_device_type": 1 00:26:22.608 }, 00:26:22.608 { 00:26:22.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.608 "dma_device_type": 2 00:26:22.608 } 00:26:22.608 ], 00:26:22.608 "driver_specific": {} 00:26:22.608 } 00:26:22.608 ] 00:26:22.608 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:26:22.608 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:22.609 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:22.609 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:26:22.609 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:22.609 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:22.609 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:22.609 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:22.609 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:22.609 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:22.609 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:22.609 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:22.609 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:26:22.609 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.609 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:22.867 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:22.867 "name": "Existed_Raid", 00:26:22.867 "uuid": "15f42a81-519a-48e1-83c1-6b038ea325e8", 00:26:22.867 "strip_size_kb": 0, 00:26:22.867 "state": "online", 00:26:22.867 "raid_level": "raid1", 00:26:22.867 "superblock": true, 00:26:22.867 "num_base_bdevs": 2, 00:26:22.867 "num_base_bdevs_discovered": 2, 00:26:22.867 "num_base_bdevs_operational": 2, 00:26:22.867 "base_bdevs_list": [ 00:26:22.867 { 00:26:22.867 "name": "BaseBdev1", 00:26:22.867 "uuid": "ae9eb81c-5e5a-4f39-8ef5-00c118ab77b0", 00:26:22.867 "is_configured": true, 00:26:22.867 "data_offset": 256, 00:26:22.867 "data_size": 7936 00:26:22.867 }, 00:26:22.867 { 00:26:22.867 "name": "BaseBdev2", 00:26:22.867 "uuid": "d0d571af-5eec-4de4-aa6a-8a05195725a7", 00:26:22.867 "is_configured": true, 00:26:22.867 "data_offset": 256, 00:26:22.867 "data_size": 7936 00:26:22.867 } 00:26:22.867 ] 00:26:22.867 }' 00:26:22.867 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:22.867 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:23.434 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:23.434 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:23.434 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:23.434 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:23.434 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:23.434 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:26:23.434 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:23.434 13:26:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:23.693 [2024-07-25 13:26:34.020329] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:23.693 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:23.693 "name": "Existed_Raid", 00:26:23.693 "aliases": [ 00:26:23.693 "15f42a81-519a-48e1-83c1-6b038ea325e8" 00:26:23.693 ], 00:26:23.693 "product_name": "Raid Volume", 00:26:23.693 "block_size": 4096, 00:26:23.693 "num_blocks": 7936, 00:26:23.693 "uuid": "15f42a81-519a-48e1-83c1-6b038ea325e8", 00:26:23.693 "assigned_rate_limits": { 00:26:23.693 "rw_ios_per_sec": 0, 00:26:23.693 "rw_mbytes_per_sec": 0, 00:26:23.693 "r_mbytes_per_sec": 0, 00:26:23.693 "w_mbytes_per_sec": 0 00:26:23.693 }, 00:26:23.693 "claimed": false, 00:26:23.693 "zoned": false, 00:26:23.693 "supported_io_types": { 00:26:23.693 "read": true, 00:26:23.693 "write": true, 00:26:23.693 "unmap": false, 00:26:23.694 "flush": false, 00:26:23.694 "reset": true, 00:26:23.694 "nvme_admin": false, 00:26:23.694 "nvme_io": false, 00:26:23.694 "nvme_io_md": false, 00:26:23.694 "write_zeroes": true, 00:26:23.694 "zcopy": false, 00:26:23.694 "get_zone_info": false, 00:26:23.694 "zone_management": false, 00:26:23.694 "zone_append": false, 00:26:23.694 "compare": false, 00:26:23.694 "compare_and_write": false, 00:26:23.694 "abort": false, 00:26:23.694 "seek_hole": false, 00:26:23.694 "seek_data": false, 
00:26:23.694 "copy": false, 00:26:23.694 "nvme_iov_md": false 00:26:23.694 }, 00:26:23.694 "memory_domains": [ 00:26:23.694 { 00:26:23.694 "dma_device_id": "system", 00:26:23.694 "dma_device_type": 1 00:26:23.694 }, 00:26:23.694 { 00:26:23.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.694 "dma_device_type": 2 00:26:23.694 }, 00:26:23.694 { 00:26:23.694 "dma_device_id": "system", 00:26:23.694 "dma_device_type": 1 00:26:23.694 }, 00:26:23.694 { 00:26:23.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.694 "dma_device_type": 2 00:26:23.694 } 00:26:23.694 ], 00:26:23.694 "driver_specific": { 00:26:23.694 "raid": { 00:26:23.694 "uuid": "15f42a81-519a-48e1-83c1-6b038ea325e8", 00:26:23.694 "strip_size_kb": 0, 00:26:23.694 "state": "online", 00:26:23.694 "raid_level": "raid1", 00:26:23.694 "superblock": true, 00:26:23.694 "num_base_bdevs": 2, 00:26:23.694 "num_base_bdevs_discovered": 2, 00:26:23.694 "num_base_bdevs_operational": 2, 00:26:23.694 "base_bdevs_list": [ 00:26:23.694 { 00:26:23.694 "name": "BaseBdev1", 00:26:23.694 "uuid": "ae9eb81c-5e5a-4f39-8ef5-00c118ab77b0", 00:26:23.694 "is_configured": true, 00:26:23.694 "data_offset": 256, 00:26:23.694 "data_size": 7936 00:26:23.694 }, 00:26:23.694 { 00:26:23.694 "name": "BaseBdev2", 00:26:23.694 "uuid": "d0d571af-5eec-4de4-aa6a-8a05195725a7", 00:26:23.694 "is_configured": true, 00:26:23.694 "data_offset": 256, 00:26:23.694 "data_size": 7936 00:26:23.694 } 00:26:23.694 ] 00:26:23.694 } 00:26:23.694 } 00:26:23.694 }' 00:26:23.694 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:23.694 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:23.694 BaseBdev2' 00:26:23.694 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:23.694 13:26:34 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:23.694 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:23.953 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:23.953 "name": "BaseBdev1", 00:26:23.953 "aliases": [ 00:26:23.953 "ae9eb81c-5e5a-4f39-8ef5-00c118ab77b0" 00:26:23.953 ], 00:26:23.953 "product_name": "Malloc disk", 00:26:23.953 "block_size": 4096, 00:26:23.953 "num_blocks": 8192, 00:26:23.953 "uuid": "ae9eb81c-5e5a-4f39-8ef5-00c118ab77b0", 00:26:23.953 "assigned_rate_limits": { 00:26:23.953 "rw_ios_per_sec": 0, 00:26:23.953 "rw_mbytes_per_sec": 0, 00:26:23.953 "r_mbytes_per_sec": 0, 00:26:23.953 "w_mbytes_per_sec": 0 00:26:23.953 }, 00:26:23.953 "claimed": true, 00:26:23.953 "claim_type": "exclusive_write", 00:26:23.953 "zoned": false, 00:26:23.953 "supported_io_types": { 00:26:23.953 "read": true, 00:26:23.953 "write": true, 00:26:23.953 "unmap": true, 00:26:23.953 "flush": true, 00:26:23.953 "reset": true, 00:26:23.953 "nvme_admin": false, 00:26:23.953 "nvme_io": false, 00:26:23.953 "nvme_io_md": false, 00:26:23.953 "write_zeroes": true, 00:26:23.953 "zcopy": true, 00:26:23.953 "get_zone_info": false, 00:26:23.953 "zone_management": false, 00:26:23.953 "zone_append": false, 00:26:23.953 "compare": false, 00:26:23.953 "compare_and_write": false, 00:26:23.953 "abort": true, 00:26:23.953 "seek_hole": false, 00:26:23.953 "seek_data": false, 00:26:23.953 "copy": true, 00:26:23.953 "nvme_iov_md": false 00:26:23.953 }, 00:26:23.953 "memory_domains": [ 00:26:23.953 { 00:26:23.953 "dma_device_id": "system", 00:26:23.953 "dma_device_type": 1 00:26:23.953 }, 00:26:23.953 { 00:26:23.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.953 "dma_device_type": 2 00:26:23.953 } 00:26:23.953 ], 00:26:23.953 "driver_specific": 
{} 00:26:23.953 }' 00:26:23.953 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:23.953 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:23.953 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:23.953 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:24.212 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:24.212 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:24.212 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:24.212 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:24.212 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:24.212 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:24.212 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:24.212 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:24.212 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:24.212 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:24.212 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:24.471 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:24.471 "name": "BaseBdev2", 00:26:24.471 "aliases": [ 00:26:24.471 "d0d571af-5eec-4de4-aa6a-8a05195725a7" 00:26:24.471 
], 00:26:24.471 "product_name": "Malloc disk", 00:26:24.471 "block_size": 4096, 00:26:24.471 "num_blocks": 8192, 00:26:24.471 "uuid": "d0d571af-5eec-4de4-aa6a-8a05195725a7", 00:26:24.471 "assigned_rate_limits": { 00:26:24.471 "rw_ios_per_sec": 0, 00:26:24.471 "rw_mbytes_per_sec": 0, 00:26:24.471 "r_mbytes_per_sec": 0, 00:26:24.471 "w_mbytes_per_sec": 0 00:26:24.471 }, 00:26:24.471 "claimed": true, 00:26:24.471 "claim_type": "exclusive_write", 00:26:24.471 "zoned": false, 00:26:24.471 "supported_io_types": { 00:26:24.471 "read": true, 00:26:24.471 "write": true, 00:26:24.471 "unmap": true, 00:26:24.471 "flush": true, 00:26:24.471 "reset": true, 00:26:24.471 "nvme_admin": false, 00:26:24.471 "nvme_io": false, 00:26:24.471 "nvme_io_md": false, 00:26:24.471 "write_zeroes": true, 00:26:24.471 "zcopy": true, 00:26:24.471 "get_zone_info": false, 00:26:24.471 "zone_management": false, 00:26:24.471 "zone_append": false, 00:26:24.471 "compare": false, 00:26:24.471 "compare_and_write": false, 00:26:24.471 "abort": true, 00:26:24.471 "seek_hole": false, 00:26:24.471 "seek_data": false, 00:26:24.471 "copy": true, 00:26:24.471 "nvme_iov_md": false 00:26:24.471 }, 00:26:24.471 "memory_domains": [ 00:26:24.471 { 00:26:24.471 "dma_device_id": "system", 00:26:24.471 "dma_device_type": 1 00:26:24.471 }, 00:26:24.471 { 00:26:24.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.471 "dma_device_type": 2 00:26:24.471 } 00:26:24.471 ], 00:26:24.471 "driver_specific": {} 00:26:24.471 }' 00:26:24.471 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:24.471 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:24.730 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:24.730 13:26:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:24.730 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:24.730 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:24.730 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:24.730 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:24.730 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:24.730 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:24.730 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:24.730 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:24.730 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:24.989 [2024-07-25 13:26:35.415819] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:24.989 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:24.989 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:26:24.989 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:24.989 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:26:24.989 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:24.989 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:26:24.989 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:24.989 13:26:35 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:24.989 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:24.989 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:24.989 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:24.989 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:24.989 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:24.989 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:24.989 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:24.989 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.989 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:25.248 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:25.248 "name": "Existed_Raid", 00:26:25.248 "uuid": "15f42a81-519a-48e1-83c1-6b038ea325e8", 00:26:25.248 "strip_size_kb": 0, 00:26:25.248 "state": "online", 00:26:25.248 "raid_level": "raid1", 00:26:25.248 "superblock": true, 00:26:25.248 "num_base_bdevs": 2, 00:26:25.248 "num_base_bdevs_discovered": 1, 00:26:25.248 "num_base_bdevs_operational": 1, 00:26:25.248 "base_bdevs_list": [ 00:26:25.248 { 00:26:25.248 "name": null, 00:26:25.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.248 "is_configured": false, 00:26:25.248 "data_offset": 256, 00:26:25.248 "data_size": 7936 00:26:25.248 }, 00:26:25.248 { 
00:26:25.248 "name": "BaseBdev2", 00:26:25.248 "uuid": "d0d571af-5eec-4de4-aa6a-8a05195725a7", 00:26:25.248 "is_configured": true, 00:26:25.248 "data_offset": 256, 00:26:25.248 "data_size": 7936 00:26:25.248 } 00:26:25.248 ] 00:26:25.248 }' 00:26:25.248 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:25.248 13:26:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:25.816 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:25.816 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:25.816 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.816 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:26.075 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:26.075 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:26.075 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:26.333 [2024-07-25 13:26:36.652035] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:26.333 [2024-07-25 13:26:36.652106] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:26.333 [2024-07-25 13:26:36.662480] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:26.333 [2024-07-25 13:26:36.662510] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:26.333 [2024-07-25 
13:26:36.662521] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25f1610 name Existed_Raid, state offline 00:26:26.333 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:26.333 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:26.333 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.333 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:26.593 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:26.593 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:26.593 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:26:26.593 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 994309 00:26:26.593 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 994309 ']' 00:26:26.593 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 994309 00:26:26.593 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:26:26.593 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:26.593 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 994309 00:26:26.593 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:26.593 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:26.593 13:26:36 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 994309' 00:26:26.593 killing process with pid 994309 00:26:26.593 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 994309 00:26:26.593 [2024-07-25 13:26:36.966963] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:26.593 13:26:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 994309 00:26:26.593 [2024-07-25 13:26:36.967819] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:26.852 13:26:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:26:26.852 00:26:26.852 real 0m10.438s 00:26:26.852 user 0m18.666s 00:26:26.852 sys 0m1.884s 00:26:26.852 13:26:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:26.852 13:26:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:26.852 ************************************ 00:26:26.852 END TEST raid_state_function_test_sb_4k 00:26:26.852 ************************************ 00:26:26.852 13:26:37 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:26:26.852 13:26:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:26.852 13:26:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:26.852 13:26:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:26.852 ************************************ 00:26:26.852 START TEST raid_superblock_test_4k 00:26:26.852 ************************************ 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@414 -- # local strip_size 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@427 -- # raid_pid=996153 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@428 -- # waitforlisten 996153 /var/tmp/spdk-raid.sock 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 996153 ']' 
00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:26.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:26.852 13:26:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:26.852 [2024-07-25 13:26:37.304116] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:26:26.852 [2024-07-25 13:26:37.304178] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996153 ] 00:26:27.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:27.111 EAL: Requested device 0000:3d:01.0 cannot be used [the same qat_pci_device_allocate()/"EAL: Requested device ... cannot be used" message pair repeats at 00:26:27.111 for each of devices 0000:3d:01.1-01.7, 0000:3d:02.0-02.7, 0000:3f:01.0-01.7 and 0000:3f:02.0-02.7] 00:26:27.111 [2024-07-25 13:26:37.437194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.111 [2024-07-25 13:26:37.524292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.111 [2024-07-25 13:26:37.587841] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*:
raid_bdev_get_ctx_size 00:26:27.111 [2024-07-25 13:26:37.587881] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:28.046 13:26:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:28.046 13:26:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:26:28.046 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:26:28.046 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:28.046 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:26:28.046 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:26:28.046 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:28.046 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:28.046 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:28.046 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:28.046 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:26:28.046 malloc1 00:26:28.046 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:28.305 [2024-07-25 13:26:38.645540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:28.305 [2024-07-25 13:26:38.645583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:26:28.305 [2024-07-25 13:26:38.645601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc812f0 00:26:28.305 [2024-07-25 13:26:38.645612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:28.305 [2024-07-25 13:26:38.647118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:28.305 [2024-07-25 13:26:38.647152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:28.305 pt1 00:26:28.305 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:28.305 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:28.305 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:26:28.305 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:26:28.305 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:28.305 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:28.305 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:28.305 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:28.305 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:26:28.564 malloc2 00:26:28.564 13:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:28.823 [2024-07-25 13:26:39.095137] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:28.823 [2024-07-25 13:26:39.095185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:28.823 [2024-07-25 13:26:39.095200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe18f70 00:26:28.823 [2024-07-25 13:26:39.095211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:28.823 [2024-07-25 13:26:39.096599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:28.823 [2024-07-25 13:26:39.096625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:28.823 pt2 00:26:28.823 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:28.823 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:28.823 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:26:29.082 [2024-07-25 13:26:39.319742] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:29.082 [2024-07-25 13:26:39.320911] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:29.082 [2024-07-25 13:26:39.321035] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xe1b760 00:26:29.082 [2024-07-25 13:26:39.321047] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:29.082 [2024-07-25 13:26:39.321239] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe1e5b0 00:26:29.082 [2024-07-25 13:26:39.321363] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe1b760 00:26:29.082 [2024-07-25 13:26:39.321377] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe1b760 
00:26:29.082 [2024-07-25 13:26:39.321480] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:29.082 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:29.082 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:29.082 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:29.082 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:29.082 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:29.082 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:29.082 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:29.082 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:29.082 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:29.082 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:29.082 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.082 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.082 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:29.082 "name": "raid_bdev1", 00:26:29.082 "uuid": "b462c4bf-61ea-49a1-8da1-8eaacef12273", 00:26:29.082 "strip_size_kb": 0, 00:26:29.082 "state": "online", 00:26:29.082 "raid_level": "raid1", 00:26:29.082 "superblock": true, 00:26:29.082 "num_base_bdevs": 2, 00:26:29.082 "num_base_bdevs_discovered": 2, 00:26:29.082 
"num_base_bdevs_operational": 2, 00:26:29.082 "base_bdevs_list": [ 00:26:29.082 { 00:26:29.082 "name": "pt1", 00:26:29.082 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:29.082 "is_configured": true, 00:26:29.082 "data_offset": 256, 00:26:29.082 "data_size": 7936 00:26:29.082 }, 00:26:29.082 { 00:26:29.082 "name": "pt2", 00:26:29.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:29.082 "is_configured": true, 00:26:29.082 "data_offset": 256, 00:26:29.082 "data_size": 7936 00:26:29.082 } 00:26:29.082 ] 00:26:29.082 }' 00:26:29.082 13:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:29.082 13:26:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:29.649 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:26:29.649 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:29.649 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:29.649 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:29.649 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:29.649 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:26:29.907 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:29.907 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:29.907 [2024-07-25 13:26:40.346777] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:29.907 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:29.907 "name": "raid_bdev1", 00:26:29.907 "aliases": [ 00:26:29.907 
"b462c4bf-61ea-49a1-8da1-8eaacef12273" 00:26:29.907 ], 00:26:29.907 "product_name": "Raid Volume", 00:26:29.907 "block_size": 4096, 00:26:29.907 "num_blocks": 7936, 00:26:29.907 "uuid": "b462c4bf-61ea-49a1-8da1-8eaacef12273", 00:26:29.907 "assigned_rate_limits": { 00:26:29.907 "rw_ios_per_sec": 0, 00:26:29.907 "rw_mbytes_per_sec": 0, 00:26:29.907 "r_mbytes_per_sec": 0, 00:26:29.907 "w_mbytes_per_sec": 0 00:26:29.907 }, 00:26:29.907 "claimed": false, 00:26:29.907 "zoned": false, 00:26:29.907 "supported_io_types": { 00:26:29.907 "read": true, 00:26:29.907 "write": true, 00:26:29.907 "unmap": false, 00:26:29.907 "flush": false, 00:26:29.907 "reset": true, 00:26:29.907 "nvme_admin": false, 00:26:29.907 "nvme_io": false, 00:26:29.907 "nvme_io_md": false, 00:26:29.907 "write_zeroes": true, 00:26:29.907 "zcopy": false, 00:26:29.907 "get_zone_info": false, 00:26:29.907 "zone_management": false, 00:26:29.907 "zone_append": false, 00:26:29.907 "compare": false, 00:26:29.907 "compare_and_write": false, 00:26:29.907 "abort": false, 00:26:29.907 "seek_hole": false, 00:26:29.907 "seek_data": false, 00:26:29.907 "copy": false, 00:26:29.907 "nvme_iov_md": false 00:26:29.907 }, 00:26:29.907 "memory_domains": [ 00:26:29.907 { 00:26:29.907 "dma_device_id": "system", 00:26:29.907 "dma_device_type": 1 00:26:29.907 }, 00:26:29.907 { 00:26:29.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:29.907 "dma_device_type": 2 00:26:29.907 }, 00:26:29.907 { 00:26:29.907 "dma_device_id": "system", 00:26:29.907 "dma_device_type": 1 00:26:29.907 }, 00:26:29.907 { 00:26:29.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:29.907 "dma_device_type": 2 00:26:29.907 } 00:26:29.907 ], 00:26:29.907 "driver_specific": { 00:26:29.907 "raid": { 00:26:29.907 "uuid": "b462c4bf-61ea-49a1-8da1-8eaacef12273", 00:26:29.907 "strip_size_kb": 0, 00:26:29.907 "state": "online", 00:26:29.907 "raid_level": "raid1", 00:26:29.907 "superblock": true, 00:26:29.907 "num_base_bdevs": 2, 00:26:29.907 
"num_base_bdevs_discovered": 2, 00:26:29.907 "num_base_bdevs_operational": 2, 00:26:29.907 "base_bdevs_list": [ 00:26:29.907 { 00:26:29.907 "name": "pt1", 00:26:29.907 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:29.907 "is_configured": true, 00:26:29.907 "data_offset": 256, 00:26:29.907 "data_size": 7936 00:26:29.907 }, 00:26:29.907 { 00:26:29.907 "name": "pt2", 00:26:29.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:29.907 "is_configured": true, 00:26:29.907 "data_offset": 256, 00:26:29.907 "data_size": 7936 00:26:29.907 } 00:26:29.907 ] 00:26:29.907 } 00:26:29.907 } 00:26:29.907 }' 00:26:29.907 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:30.166 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:30.166 pt2' 00:26:30.166 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:30.166 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:30.166 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:30.166 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:30.166 "name": "pt1", 00:26:30.166 "aliases": [ 00:26:30.166 "00000000-0000-0000-0000-000000000001" 00:26:30.166 ], 00:26:30.166 "product_name": "passthru", 00:26:30.166 "block_size": 4096, 00:26:30.166 "num_blocks": 8192, 00:26:30.166 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:30.166 "assigned_rate_limits": { 00:26:30.166 "rw_ios_per_sec": 0, 00:26:30.166 "rw_mbytes_per_sec": 0, 00:26:30.166 "r_mbytes_per_sec": 0, 00:26:30.166 "w_mbytes_per_sec": 0 00:26:30.166 }, 00:26:30.166 "claimed": true, 00:26:30.166 "claim_type": "exclusive_write", 00:26:30.166 
"zoned": false, 00:26:30.166 "supported_io_types": { 00:26:30.166 "read": true, 00:26:30.166 "write": true, 00:26:30.166 "unmap": true, 00:26:30.166 "flush": true, 00:26:30.166 "reset": true, 00:26:30.166 "nvme_admin": false, 00:26:30.166 "nvme_io": false, 00:26:30.166 "nvme_io_md": false, 00:26:30.166 "write_zeroes": true, 00:26:30.166 "zcopy": true, 00:26:30.166 "get_zone_info": false, 00:26:30.166 "zone_management": false, 00:26:30.166 "zone_append": false, 00:26:30.166 "compare": false, 00:26:30.166 "compare_and_write": false, 00:26:30.166 "abort": true, 00:26:30.166 "seek_hole": false, 00:26:30.166 "seek_data": false, 00:26:30.166 "copy": true, 00:26:30.166 "nvme_iov_md": false 00:26:30.166 }, 00:26:30.166 "memory_domains": [ 00:26:30.166 { 00:26:30.166 "dma_device_id": "system", 00:26:30.166 "dma_device_type": 1 00:26:30.166 }, 00:26:30.166 { 00:26:30.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:30.166 "dma_device_type": 2 00:26:30.166 } 00:26:30.166 ], 00:26:30.166 "driver_specific": { 00:26:30.166 "passthru": { 00:26:30.166 "name": "pt1", 00:26:30.166 "base_bdev_name": "malloc1" 00:26:30.166 } 00:26:30.166 } 00:26:30.166 }' 00:26:30.166 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:30.424 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:30.424 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:30.424 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:30.424 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:30.424 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:30.424 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:30.424 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:30.424 13:26:40 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:30.424 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:30.682 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:30.682 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:30.682 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:30.682 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:30.682 13:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:30.941 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:30.941 "name": "pt2", 00:26:30.941 "aliases": [ 00:26:30.941 "00000000-0000-0000-0000-000000000002" 00:26:30.941 ], 00:26:30.941 "product_name": "passthru", 00:26:30.941 "block_size": 4096, 00:26:30.941 "num_blocks": 8192, 00:26:30.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:30.941 "assigned_rate_limits": { 00:26:30.941 "rw_ios_per_sec": 0, 00:26:30.941 "rw_mbytes_per_sec": 0, 00:26:30.941 "r_mbytes_per_sec": 0, 00:26:30.941 "w_mbytes_per_sec": 0 00:26:30.941 }, 00:26:30.941 "claimed": true, 00:26:30.941 "claim_type": "exclusive_write", 00:26:30.941 "zoned": false, 00:26:30.941 "supported_io_types": { 00:26:30.941 "read": true, 00:26:30.941 "write": true, 00:26:30.941 "unmap": true, 00:26:30.941 "flush": true, 00:26:30.941 "reset": true, 00:26:30.941 "nvme_admin": false, 00:26:30.941 "nvme_io": false, 00:26:30.941 "nvme_io_md": false, 00:26:30.941 "write_zeroes": true, 00:26:30.941 "zcopy": true, 00:26:30.941 "get_zone_info": false, 00:26:30.941 "zone_management": false, 00:26:30.941 "zone_append": false, 00:26:30.941 "compare": false, 00:26:30.941 
"compare_and_write": false, 00:26:30.941 "abort": true, 00:26:30.941 "seek_hole": false, 00:26:30.941 "seek_data": false, 00:26:30.941 "copy": true, 00:26:30.941 "nvme_iov_md": false 00:26:30.941 }, 00:26:30.941 "memory_domains": [ 00:26:30.941 { 00:26:30.941 "dma_device_id": "system", 00:26:30.941 "dma_device_type": 1 00:26:30.941 }, 00:26:30.941 { 00:26:30.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:30.941 "dma_device_type": 2 00:26:30.941 } 00:26:30.941 ], 00:26:30.941 "driver_specific": { 00:26:30.941 "passthru": { 00:26:30.941 "name": "pt2", 00:26:30.941 "base_bdev_name": "malloc2" 00:26:30.941 } 00:26:30.941 } 00:26:30.941 }' 00:26:30.941 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:30.941 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:30.941 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:30.941 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:30.941 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:30.941 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:30.941 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:31.199 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:31.199 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:31.199 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:31.200 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:31.200 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:31.200 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:31.200 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:26:31.458 [2024-07-25 13:26:41.778537] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:31.458 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=b462c4bf-61ea-49a1-8da1-8eaacef12273 00:26:31.458 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' -z b462c4bf-61ea-49a1-8da1-8eaacef12273 ']' 00:26:31.458 13:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:31.716 [2024-07-25 13:26:42.002903] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:31.716 [2024-07-25 13:26:42.002918] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:31.716 [2024-07-25 13:26:42.002967] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:31.716 [2024-07-25 13:26:42.003015] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:31.716 [2024-07-25 13:26:42.003026] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe1b760 name raid_bdev1, state offline 00:26:31.716 13:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.716 13:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:26:31.975 13:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:26:31.975 13:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:26:31.975 
13:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:31.975 13:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:32.233 13:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:32.233 13:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:32.233 13:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:32.233 13:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:32.492 13:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:26:32.492 13:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:32.492 13:26:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:26:32.492 13:26:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:32.492 13:26:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:32.492 13:26:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:32.492 13:26:42 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:32.492 13:26:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:32.492 13:26:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:32.492 13:26:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:32.492 13:26:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:32.492 13:26:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:32.492 13:26:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:32.751 [2024-07-25 13:26:43.129824] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:32.751 [2024-07-25 13:26:43.131066] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:32.751 [2024-07-25 13:26:43.131117] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:32.751 [2024-07-25 13:26:43.131168] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:32.751 [2024-07-25 13:26:43.131186] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:32.751 [2024-07-25 13:26:43.131195] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe1b9f0 name raid_bdev1, state configuring 00:26:32.751 request: 
00:26:32.751 { 00:26:32.751 "name": "raid_bdev1", 00:26:32.751 "raid_level": "raid1", 00:26:32.751 "base_bdevs": [ 00:26:32.751 "malloc1", 00:26:32.751 "malloc2" 00:26:32.751 ], 00:26:32.751 "superblock": false, 00:26:32.751 "method": "bdev_raid_create", 00:26:32.752 "req_id": 1 00:26:32.752 } 00:26:32.752 Got JSON-RPC error response 00:26:32.752 response: 00:26:32.752 { 00:26:32.752 "code": -17, 00:26:32.752 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:32.752 } 00:26:32.752 13:26:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:26:32.752 13:26:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:32.752 13:26:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:32.752 13:26:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:32.752 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.752 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:26:33.028 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:26:33.028 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:26:33.028 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:33.287 [2024-07-25 13:26:43.591130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:33.287 [2024-07-25 13:26:43.591185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:33.287 [2024-07-25 13:26:43.591205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0xe24bf0 00:26:33.287 [2024-07-25 13:26:43.591219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:33.287 [2024-07-25 13:26:43.592697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:33.287 [2024-07-25 13:26:43.592727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:33.287 [2024-07-25 13:26:43.592789] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:33.287 [2024-07-25 13:26:43.592813] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:33.287 pt1 00:26:33.287 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:26:33.287 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:33.287 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:33.287 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:33.287 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:33.287 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:33.287 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:33.287 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:33.287 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:33.287 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:33.287 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:26:33.287 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.545 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:33.545 "name": "raid_bdev1", 00:26:33.545 "uuid": "b462c4bf-61ea-49a1-8da1-8eaacef12273", 00:26:33.545 "strip_size_kb": 0, 00:26:33.545 "state": "configuring", 00:26:33.545 "raid_level": "raid1", 00:26:33.545 "superblock": true, 00:26:33.545 "num_base_bdevs": 2, 00:26:33.545 "num_base_bdevs_discovered": 1, 00:26:33.545 "num_base_bdevs_operational": 2, 00:26:33.545 "base_bdevs_list": [ 00:26:33.545 { 00:26:33.545 "name": "pt1", 00:26:33.545 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:33.545 "is_configured": true, 00:26:33.545 "data_offset": 256, 00:26:33.545 "data_size": 7936 00:26:33.545 }, 00:26:33.545 { 00:26:33.545 "name": null, 00:26:33.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:33.545 "is_configured": false, 00:26:33.545 "data_offset": 256, 00:26:33.545 "data_size": 7936 00:26:33.545 } 00:26:33.545 ] 00:26:33.545 }' 00:26:33.545 13:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:33.545 13:26:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:34.112 [2024-07-25 13:26:44.569728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:34.112 
[2024-07-25 13:26:44.569773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.112 [2024-07-25 13:26:44.569789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe1bb10 00:26:34.112 [2024-07-25 13:26:44.569801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:34.112 [2024-07-25 13:26:44.570105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.112 [2024-07-25 13:26:44.570121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:34.112 [2024-07-25 13:26:44.570186] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:34.112 [2024-07-25 13:26:44.570204] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:34.112 [2024-07-25 13:26:44.570292] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xc7fc30 00:26:34.112 [2024-07-25 13:26:44.570301] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:34.112 [2024-07-25 13:26:44.570455] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe1a5d0 00:26:34.112 [2024-07-25 13:26:44.570568] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc7fc30 00:26:34.112 [2024-07-25 13:26:44.570577] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc7fc30 00:26:34.112 [2024-07-25 13:26:44.570664] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:34.112 pt2 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:34.112 13:26:44 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.112 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.370 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:34.370 "name": "raid_bdev1", 00:26:34.370 "uuid": "b462c4bf-61ea-49a1-8da1-8eaacef12273", 00:26:34.370 "strip_size_kb": 0, 00:26:34.370 "state": "online", 00:26:34.370 "raid_level": "raid1", 00:26:34.370 "superblock": true, 00:26:34.370 "num_base_bdevs": 2, 00:26:34.370 "num_base_bdevs_discovered": 2, 00:26:34.370 "num_base_bdevs_operational": 2, 00:26:34.370 "base_bdevs_list": [ 00:26:34.370 { 00:26:34.371 "name": "pt1", 00:26:34.371 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:34.371 "is_configured": true, 00:26:34.371 "data_offset": 256, 00:26:34.371 "data_size": 7936 
00:26:34.371 }, 00:26:34.371 { 00:26:34.371 "name": "pt2", 00:26:34.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:34.371 "is_configured": true, 00:26:34.371 "data_offset": 256, 00:26:34.371 "data_size": 7936 00:26:34.371 } 00:26:34.371 ] 00:26:34.371 }' 00:26:34.371 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:34.371 13:26:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:34.937 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:26:34.937 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:34.937 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:34.937 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:34.937 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:34.937 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:26:34.937 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:34.937 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:35.196 [2024-07-25 13:26:45.516451] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:35.196 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:35.196 "name": "raid_bdev1", 00:26:35.196 "aliases": [ 00:26:35.196 "b462c4bf-61ea-49a1-8da1-8eaacef12273" 00:26:35.196 ], 00:26:35.196 "product_name": "Raid Volume", 00:26:35.196 "block_size": 4096, 00:26:35.196 "num_blocks": 7936, 00:26:35.196 "uuid": "b462c4bf-61ea-49a1-8da1-8eaacef12273", 00:26:35.196 "assigned_rate_limits": { 
00:26:35.196 "rw_ios_per_sec": 0, 00:26:35.196 "rw_mbytes_per_sec": 0, 00:26:35.196 "r_mbytes_per_sec": 0, 00:26:35.196 "w_mbytes_per_sec": 0 00:26:35.196 }, 00:26:35.196 "claimed": false, 00:26:35.196 "zoned": false, 00:26:35.196 "supported_io_types": { 00:26:35.196 "read": true, 00:26:35.196 "write": true, 00:26:35.196 "unmap": false, 00:26:35.196 "flush": false, 00:26:35.196 "reset": true, 00:26:35.196 "nvme_admin": false, 00:26:35.196 "nvme_io": false, 00:26:35.196 "nvme_io_md": false, 00:26:35.196 "write_zeroes": true, 00:26:35.196 "zcopy": false, 00:26:35.196 "get_zone_info": false, 00:26:35.196 "zone_management": false, 00:26:35.196 "zone_append": false, 00:26:35.196 "compare": false, 00:26:35.196 "compare_and_write": false, 00:26:35.196 "abort": false, 00:26:35.196 "seek_hole": false, 00:26:35.196 "seek_data": false, 00:26:35.196 "copy": false, 00:26:35.196 "nvme_iov_md": false 00:26:35.196 }, 00:26:35.196 "memory_domains": [ 00:26:35.196 { 00:26:35.196 "dma_device_id": "system", 00:26:35.196 "dma_device_type": 1 00:26:35.196 }, 00:26:35.196 { 00:26:35.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:35.196 "dma_device_type": 2 00:26:35.196 }, 00:26:35.196 { 00:26:35.196 "dma_device_id": "system", 00:26:35.196 "dma_device_type": 1 00:26:35.196 }, 00:26:35.196 { 00:26:35.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:35.196 "dma_device_type": 2 00:26:35.196 } 00:26:35.196 ], 00:26:35.196 "driver_specific": { 00:26:35.196 "raid": { 00:26:35.196 "uuid": "b462c4bf-61ea-49a1-8da1-8eaacef12273", 00:26:35.196 "strip_size_kb": 0, 00:26:35.196 "state": "online", 00:26:35.196 "raid_level": "raid1", 00:26:35.196 "superblock": true, 00:26:35.196 "num_base_bdevs": 2, 00:26:35.196 "num_base_bdevs_discovered": 2, 00:26:35.196 "num_base_bdevs_operational": 2, 00:26:35.196 "base_bdevs_list": [ 00:26:35.196 { 00:26:35.196 "name": "pt1", 00:26:35.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:35.196 "is_configured": true, 00:26:35.196 "data_offset": 256, 
00:26:35.196 "data_size": 7936 00:26:35.196 }, 00:26:35.196 { 00:26:35.196 "name": "pt2", 00:26:35.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:35.196 "is_configured": true, 00:26:35.196 "data_offset": 256, 00:26:35.196 "data_size": 7936 00:26:35.196 } 00:26:35.196 ] 00:26:35.196 } 00:26:35.196 } 00:26:35.196 }' 00:26:35.196 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:35.196 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:35.196 pt2' 00:26:35.196 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:35.196 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:35.196 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:35.455 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:35.455 "name": "pt1", 00:26:35.455 "aliases": [ 00:26:35.455 "00000000-0000-0000-0000-000000000001" 00:26:35.455 ], 00:26:35.455 "product_name": "passthru", 00:26:35.455 "block_size": 4096, 00:26:35.455 "num_blocks": 8192, 00:26:35.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:35.455 "assigned_rate_limits": { 00:26:35.455 "rw_ios_per_sec": 0, 00:26:35.455 "rw_mbytes_per_sec": 0, 00:26:35.455 "r_mbytes_per_sec": 0, 00:26:35.455 "w_mbytes_per_sec": 0 00:26:35.455 }, 00:26:35.455 "claimed": true, 00:26:35.455 "claim_type": "exclusive_write", 00:26:35.455 "zoned": false, 00:26:35.455 "supported_io_types": { 00:26:35.455 "read": true, 00:26:35.455 "write": true, 00:26:35.455 "unmap": true, 00:26:35.455 "flush": true, 00:26:35.455 "reset": true, 00:26:35.455 "nvme_admin": false, 00:26:35.455 "nvme_io": false, 00:26:35.455 "nvme_io_md": 
false, 00:26:35.455 "write_zeroes": true, 00:26:35.455 "zcopy": true, 00:26:35.455 "get_zone_info": false, 00:26:35.455 "zone_management": false, 00:26:35.455 "zone_append": false, 00:26:35.455 "compare": false, 00:26:35.455 "compare_and_write": false, 00:26:35.455 "abort": true, 00:26:35.455 "seek_hole": false, 00:26:35.455 "seek_data": false, 00:26:35.455 "copy": true, 00:26:35.455 "nvme_iov_md": false 00:26:35.455 }, 00:26:35.455 "memory_domains": [ 00:26:35.455 { 00:26:35.455 "dma_device_id": "system", 00:26:35.455 "dma_device_type": 1 00:26:35.455 }, 00:26:35.455 { 00:26:35.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:35.455 "dma_device_type": 2 00:26:35.455 } 00:26:35.455 ], 00:26:35.455 "driver_specific": { 00:26:35.455 "passthru": { 00:26:35.455 "name": "pt1", 00:26:35.455 "base_bdev_name": "malloc1" 00:26:35.455 } 00:26:35.455 } 00:26:35.455 }' 00:26:35.455 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:35.455 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:35.455 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:35.455 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:35.455 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:35.713 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:35.713 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:35.713 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:35.713 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:35.713 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:35.713 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:26:35.713 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:35.713 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:35.713 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:35.713 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:35.971 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:35.971 "name": "pt2", 00:26:35.971 "aliases": [ 00:26:35.971 "00000000-0000-0000-0000-000000000002" 00:26:35.971 ], 00:26:35.971 "product_name": "passthru", 00:26:35.971 "block_size": 4096, 00:26:35.971 "num_blocks": 8192, 00:26:35.971 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:35.971 "assigned_rate_limits": { 00:26:35.971 "rw_ios_per_sec": 0, 00:26:35.971 "rw_mbytes_per_sec": 0, 00:26:35.971 "r_mbytes_per_sec": 0, 00:26:35.971 "w_mbytes_per_sec": 0 00:26:35.971 }, 00:26:35.971 "claimed": true, 00:26:35.971 "claim_type": "exclusive_write", 00:26:35.971 "zoned": false, 00:26:35.971 "supported_io_types": { 00:26:35.971 "read": true, 00:26:35.971 "write": true, 00:26:35.971 "unmap": true, 00:26:35.971 "flush": true, 00:26:35.971 "reset": true, 00:26:35.971 "nvme_admin": false, 00:26:35.971 "nvme_io": false, 00:26:35.971 "nvme_io_md": false, 00:26:35.971 "write_zeroes": true, 00:26:35.971 "zcopy": true, 00:26:35.971 "get_zone_info": false, 00:26:35.971 "zone_management": false, 00:26:35.971 "zone_append": false, 00:26:35.971 "compare": false, 00:26:35.971 "compare_and_write": false, 00:26:35.971 "abort": true, 00:26:35.971 "seek_hole": false, 00:26:35.971 "seek_data": false, 00:26:35.971 "copy": true, 00:26:35.971 "nvme_iov_md": false 00:26:35.971 }, 00:26:35.971 "memory_domains": [ 00:26:35.971 { 00:26:35.971 "dma_device_id": "system", 
00:26:35.971 "dma_device_type": 1 00:26:35.971 }, 00:26:35.971 { 00:26:35.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:35.971 "dma_device_type": 2 00:26:35.971 } 00:26:35.971 ], 00:26:35.971 "driver_specific": { 00:26:35.971 "passthru": { 00:26:35.971 "name": "pt2", 00:26:35.971 "base_bdev_name": "malloc2" 00:26:35.971 } 00:26:35.971 } 00:26:35.971 }' 00:26:35.971 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:35.971 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:35.971 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:35.972 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:36.236 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:36.236 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:36.236 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:36.236 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:36.236 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:36.236 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:36.236 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:36.236 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:36.236 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:36.236 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:26:36.496 [2024-07-25 13:26:46.916163] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:26:36.496 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # '[' b462c4bf-61ea-49a1-8da1-8eaacef12273 '!=' b462c4bf-61ea-49a1-8da1-8eaacef12273 ']' 00:26:36.496 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:26:36.496 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:36.496 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:26:36.496 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:36.754 [2024-07-25 13:26:47.144551] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:36.754 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:36.754 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:36.754 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:36.754 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:36.754 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:36.754 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:36.754 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:36.754 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:36.754 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:36.754 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:36.754 13:26:47 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.754 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.012 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:37.012 "name": "raid_bdev1", 00:26:37.012 "uuid": "b462c4bf-61ea-49a1-8da1-8eaacef12273", 00:26:37.012 "strip_size_kb": 0, 00:26:37.012 "state": "online", 00:26:37.012 "raid_level": "raid1", 00:26:37.012 "superblock": true, 00:26:37.012 "num_base_bdevs": 2, 00:26:37.012 "num_base_bdevs_discovered": 1, 00:26:37.012 "num_base_bdevs_operational": 1, 00:26:37.012 "base_bdevs_list": [ 00:26:37.012 { 00:26:37.012 "name": null, 00:26:37.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.012 "is_configured": false, 00:26:37.012 "data_offset": 256, 00:26:37.012 "data_size": 7936 00:26:37.012 }, 00:26:37.012 { 00:26:37.012 "name": "pt2", 00:26:37.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:37.012 "is_configured": true, 00:26:37.012 "data_offset": 256, 00:26:37.012 "data_size": 7936 00:26:37.012 } 00:26:37.012 ] 00:26:37.012 }' 00:26:37.012 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:37.012 13:26:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:37.579 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:37.837 [2024-07-25 13:26:48.167220] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:37.837 [2024-07-25 13:26:48.167243] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:37.837 [2024-07-25 13:26:48.167286] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:26:37.837 [2024-07-25 13:26:48.167324] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:37.837 [2024-07-25 13:26:48.167335] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc7fc30 name raid_bdev1, state offline 00:26:37.837 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.837 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:26:38.095 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:26:38.095 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:26:38.095 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:38.095 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:38.095 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:38.354 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:38.354 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:38.354 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:26:38.354 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:26:38.354 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@534 -- # i=1 00:26:38.354 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:26:38.612 [2024-07-25 13:26:48.848989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:38.612 [2024-07-25 13:26:48.849030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:38.612 [2024-07-25 13:26:48.849046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe1ab60 00:26:38.612 [2024-07-25 13:26:48.849058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:38.612 [2024-07-25 13:26:48.850538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:38.612 [2024-07-25 13:26:48.850566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:38.612 [2024-07-25 13:26:48.850624] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:38.613 [2024-07-25 13:26:48.850647] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:38.613 [2024-07-25 13:26:48.850727] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xc7fdd0 00:26:38.613 [2024-07-25 13:26:48.850737] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:38.613 [2024-07-25 13:26:48.850892] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe1a5d0 00:26:38.613 [2024-07-25 13:26:48.851002] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc7fdd0 00:26:38.613 [2024-07-25 13:26:48.851010] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc7fdd0 00:26:38.613 [2024-07-25 13:26:48.851102] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:38.613 pt2 00:26:38.613 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:38.613 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:38.613 
13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:38.613 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:38.613 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:38.613 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:38.613 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:38.613 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:38.613 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:38.613 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:38.613 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.613 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.871 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:38.871 "name": "raid_bdev1", 00:26:38.871 "uuid": "b462c4bf-61ea-49a1-8da1-8eaacef12273", 00:26:38.871 "strip_size_kb": 0, 00:26:38.871 "state": "online", 00:26:38.871 "raid_level": "raid1", 00:26:38.871 "superblock": true, 00:26:38.871 "num_base_bdevs": 2, 00:26:38.871 "num_base_bdevs_discovered": 1, 00:26:38.871 "num_base_bdevs_operational": 1, 00:26:38.871 "base_bdevs_list": [ 00:26:38.871 { 00:26:38.871 "name": null, 00:26:38.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.871 "is_configured": false, 00:26:38.871 "data_offset": 256, 00:26:38.871 "data_size": 7936 00:26:38.871 }, 00:26:38.871 { 00:26:38.871 "name": "pt2", 00:26:38.871 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:26:38.871 "is_configured": true, 00:26:38.871 "data_offset": 256, 00:26:38.871 "data_size": 7936 00:26:38.871 } 00:26:38.871 ] 00:26:38.871 }' 00:26:38.871 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:38.871 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:39.438 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:39.438 [2024-07-25 13:26:49.879802] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:39.438 [2024-07-25 13:26:49.879824] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:39.438 [2024-07-25 13:26:49.879864] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:39.438 [2024-07-25 13:26:49.879902] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:39.438 [2024-07-25 13:26:49.879912] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc7fdd0 name raid_bdev1, state offline 00:26:39.438 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.438 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:26:39.696 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:26:39.696 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:26:39.696 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:26:39.696 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:39.955 [2024-07-25 13:26:50.345018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:39.955 [2024-07-25 13:26:50.345059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.955 [2024-07-25 13:26:50.345075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe19b00 00:26:39.955 [2024-07-25 13:26:50.345087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.955 [2024-07-25 13:26:50.346575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.955 [2024-07-25 13:26:50.346602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:39.955 [2024-07-25 13:26:50.346659] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:39.955 [2024-07-25 13:26:50.346681] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:39.955 [2024-07-25 13:26:50.346768] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:39.955 [2024-07-25 13:26:50.346780] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:39.955 [2024-07-25 13:26:50.346792] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe1f330 name raid_bdev1, state configuring 00:26:39.955 [2024-07-25 13:26:50.346814] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:39.955 [2024-07-25 13:26:50.346861] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xe1f330 00:26:39.955 [2024-07-25 13:26:50.346870] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:39.955 [2024-07-25 13:26:50.347018] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe19ec0 00:26:39.955 [2024-07-25 13:26:50.347128] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe1f330 00:26:39.955 [2024-07-25 13:26:50.347137] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe1f330 00:26:39.955 [2024-07-25 13:26:50.347237] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:39.955 pt1 00:26:39.955 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:26:39.955 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:39.955 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:39.955 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:39.955 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:39.955 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:39.955 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:39.955 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:39.955 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:39.955 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:39.955 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:39.955 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.955 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:26:40.213 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:40.214 "name": "raid_bdev1", 00:26:40.214 "uuid": "b462c4bf-61ea-49a1-8da1-8eaacef12273", 00:26:40.214 "strip_size_kb": 0, 00:26:40.214 "state": "online", 00:26:40.214 "raid_level": "raid1", 00:26:40.214 "superblock": true, 00:26:40.214 "num_base_bdevs": 2, 00:26:40.214 "num_base_bdevs_discovered": 1, 00:26:40.214 "num_base_bdevs_operational": 1, 00:26:40.214 "base_bdevs_list": [ 00:26:40.214 { 00:26:40.214 "name": null, 00:26:40.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.214 "is_configured": false, 00:26:40.214 "data_offset": 256, 00:26:40.214 "data_size": 7936 00:26:40.214 }, 00:26:40.214 { 00:26:40.214 "name": "pt2", 00:26:40.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:40.214 "is_configured": true, 00:26:40.214 "data_offset": 256, 00:26:40.214 "data_size": 7936 00:26:40.214 } 00:26:40.214 ] 00:26:40.214 }' 00:26:40.214 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:40.214 13:26:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:40.781 13:26:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:26:40.781 13:26:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:41.053 13:26:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:26:41.054 13:26:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:26:41.054 13:26:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:41.313 [2024-07-25 13:26:51.604526] 
bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:41.313 13:26:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # '[' b462c4bf-61ea-49a1-8da1-8eaacef12273 '!=' b462c4bf-61ea-49a1-8da1-8eaacef12273 ']' 00:26:41.313 13:26:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@578 -- # killprocess 996153 00:26:41.313 13:26:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 996153 ']' 00:26:41.313 13:26:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 996153 00:26:41.313 13:26:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:26:41.313 13:26:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:41.313 13:26:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 996153 00:26:41.313 13:26:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:41.313 13:26:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:41.313 13:26:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 996153' 00:26:41.313 killing process with pid 996153 00:26:41.313 13:26:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 996153 00:26:41.313 [2024-07-25 13:26:51.681641] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:41.313 [2024-07-25 13:26:51.681688] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:41.313 [2024-07-25 13:26:51.681725] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:41.313 [2024-07-25 13:26:51.681735] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe1f330 name raid_bdev1, state offline 00:26:41.313 13:26:51 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 996153 00:26:41.313 [2024-07-25 13:26:51.697383] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:41.572 13:26:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@580 -- # return 0 00:26:41.572 00:26:41.572 real 0m14.643s 00:26:41.572 user 0m26.464s 00:26:41.572 sys 0m2.809s 00:26:41.572 13:26:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:41.572 13:26:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:41.572 ************************************ 00:26:41.572 END TEST raid_superblock_test_4k 00:26:41.572 ************************************ 00:26:41.572 13:26:51 bdev_raid -- bdev/bdev_raid.sh@980 -- # '[' true = true ']' 00:26:41.572 13:26:51 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:26:41.572 13:26:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:26:41.572 13:26:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:41.572 13:26:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:41.572 ************************************ 00:26:41.572 START TEST raid_rebuild_test_sb_4k 00:26:41.572 ************************************ 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@588 -- # local verify=true 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # local strip_size 00:26:41.572 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # local create_arg 00:26:41.573 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:26:41.573 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@594 -- # local data_offset 00:26:41.573 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:26:41.573 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:26:41.573 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:26:41.573 13:26:51 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:26:41.573 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # raid_pid=999016 00:26:41.573 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # waitforlisten 999016 /var/tmp/spdk-raid.sock 00:26:41.573 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:41.573 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 999016 ']' 00:26:41.573 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:41.573 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:41.573 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:41.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:41.573 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:41.573 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:41.573 [2024-07-25 13:26:52.036771] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:26:41.573 [2024-07-25 13:26:52.036827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999016 ] 00:26:41.573 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:41.573 Zero copy mechanism will not be used. 
00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:41.832 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:41.832 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:41.832 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.832 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:41.832 [2024-07-25 13:26:52.167915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.832 [2024-07-25 13:26:52.255160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.832 [2024-07-25 13:26:52.318117] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:41.832 [2024-07-25 13:26:52.318158] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:42.769 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:42.769 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:26:42.769 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:42.769 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:26:42.769 BaseBdev1_malloc 00:26:42.769 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:43.029 [2024-07-25 13:26:53.355907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:43.029 [2024-07-25 13:26:53.355948] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:43.029 [2024-07-25 13:26:53.355967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb5c5f0 00:26:43.029 [2024-07-25 13:26:53.355978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:43.029 [2024-07-25 13:26:53.357458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:43.029 [2024-07-25 13:26:53.357485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:43.029 BaseBdev1 00:26:43.029 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:43.029 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:26:43.289 BaseBdev2_malloc 00:26:43.289 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:43.548 [2024-07-25 13:26:53.817527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:43.548 [2024-07-25 13:26:53.817568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:43.548 [2024-07-25 13:26:53.817586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcfffd0 00:26:43.548 [2024-07-25 13:26:53.817597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:43.548 [2024-07-25 13:26:53.819005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:43.548 [2024-07-25 13:26:53.819031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:43.548 BaseBdev2 00:26:43.548 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:26:43.807 spare_malloc 00:26:43.807 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:43.807 spare_delay 00:26:43.807 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:44.065 [2024-07-25 13:26:54.487581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:44.065 [2024-07-25 13:26:54.487621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:44.065 [2024-07-25 13:26:54.487638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcf4340 00:26:44.065 [2024-07-25 13:26:54.487650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:44.065 [2024-07-25 13:26:54.489059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:44.065 [2024-07-25 13:26:54.489087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:44.066 spare 00:26:44.066 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:26:44.324 [2024-07-25 13:26:54.708204] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:44.324 [2024-07-25 13:26:54.709377] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:44.324 [2024-07-25 13:26:54.709517] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xb54290 00:26:44.324 [2024-07-25 13:26:54.709528] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:44.324 [2024-07-25 13:26:54.709707] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb56de0 00:26:44.324 [2024-07-25 13:26:54.709832] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb54290 00:26:44.324 [2024-07-25 13:26:54.709842] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb54290 00:26:44.324 [2024-07-25 13:26:54.709941] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:44.324 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:44.324 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:44.324 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:44.324 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:44.324 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:44.324 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:44.324 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:44.324 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:44.324 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:44.324 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:44.324 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.324 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.583 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:44.583 "name": "raid_bdev1", 00:26:44.583 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:26:44.583 "strip_size_kb": 0, 00:26:44.583 "state": "online", 00:26:44.583 "raid_level": "raid1", 00:26:44.583 "superblock": true, 00:26:44.583 "num_base_bdevs": 2, 00:26:44.583 "num_base_bdevs_discovered": 2, 00:26:44.583 "num_base_bdevs_operational": 2, 00:26:44.583 "base_bdevs_list": [ 00:26:44.583 { 00:26:44.583 "name": "BaseBdev1", 00:26:44.583 "uuid": "9062e658-987e-542d-a10f-ce5576130e86", 00:26:44.583 "is_configured": true, 00:26:44.583 "data_offset": 256, 00:26:44.583 "data_size": 7936 00:26:44.583 }, 00:26:44.583 { 00:26:44.583 "name": "BaseBdev2", 00:26:44.583 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:26:44.583 "is_configured": true, 00:26:44.583 "data_offset": 256, 00:26:44.583 "data_size": 7936 00:26:44.583 } 00:26:44.583 ] 00:26:44.583 }' 00:26:44.583 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:44.583 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:45.150 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:45.150 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:26:45.408 [2024-07-25 13:26:55.731144] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:45.409 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:26:45.409 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.409 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:45.667 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:26:45.667 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:26:45.667 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:26:45.667 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:26:45.667 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:45.667 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:45.667 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:45.667 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:45.667 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:45.667 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:45.667 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:26:45.667 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:45.667 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:45.667 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:45.926 [2024-07-25 13:26:56.188149] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb56de0 00:26:45.926 
/dev/nbd0 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:45.926 1+0 records in 00:26:45.926 1+0 records out 00:26:45.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256689 s, 16.0 MB/s 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:45.926 
13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:26:45.926 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:26:46.863 7936+0 records in 00:26:46.863 7936+0 records out 00:26:46.863 32505856 bytes (33 MB, 31 MiB) copied, 0.754361 s, 43.1 MB/s 00:26:46.863 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:46.863 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:46.863 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:46.863 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:46.863 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:26:46.863 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:46.863 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:46.863 [2024-07-25 13:26:57.250672] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:46.863 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:46.863 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:46.863 
13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:46.863 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:46.863 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:46.863 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:46.863 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:26:46.863 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:26:46.863 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:47.122 [2024-07-25 13:26:57.471286] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:47.122 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:47.122 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:47.122 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:47.122 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:47.122 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:47.122 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:47.122 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:47.122 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:47.122 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:47.122 13:26:57 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:47.122 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.122 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.411 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:47.411 "name": "raid_bdev1", 00:26:47.411 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:26:47.411 "strip_size_kb": 0, 00:26:47.411 "state": "online", 00:26:47.411 "raid_level": "raid1", 00:26:47.411 "superblock": true, 00:26:47.411 "num_base_bdevs": 2, 00:26:47.411 "num_base_bdevs_discovered": 1, 00:26:47.411 "num_base_bdevs_operational": 1, 00:26:47.411 "base_bdevs_list": [ 00:26:47.411 { 00:26:47.411 "name": null, 00:26:47.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.411 "is_configured": false, 00:26:47.411 "data_offset": 256, 00:26:47.411 "data_size": 7936 00:26:47.411 }, 00:26:47.411 { 00:26:47.411 "name": "BaseBdev2", 00:26:47.411 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:26:47.411 "is_configured": true, 00:26:47.411 "data_offset": 256, 00:26:47.412 "data_size": 7936 00:26:47.412 } 00:26:47.412 ] 00:26:47.412 }' 00:26:47.412 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:47.412 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:47.979 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:48.239 [2024-07-25 13:26:58.493992] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:48.239 [2024-07-25 13:26:58.498711] bdev_raid.c: 263:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0xb57010 00:26:48.239 [2024-07-25 13:26:58.500731] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:48.239 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:49.176 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:49.176 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:49.176 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:49.176 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:49.176 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:49.176 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.176 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.435 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:49.435 "name": "raid_bdev1", 00:26:49.435 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:26:49.435 "strip_size_kb": 0, 00:26:49.435 "state": "online", 00:26:49.435 "raid_level": "raid1", 00:26:49.435 "superblock": true, 00:26:49.435 "num_base_bdevs": 2, 00:26:49.435 "num_base_bdevs_discovered": 2, 00:26:49.435 "num_base_bdevs_operational": 2, 00:26:49.435 "process": { 00:26:49.435 "type": "rebuild", 00:26:49.435 "target": "spare", 00:26:49.435 "progress": { 00:26:49.435 "blocks": 3072, 00:26:49.435 "percent": 38 00:26:49.435 } 00:26:49.435 }, 00:26:49.435 "base_bdevs_list": [ 00:26:49.435 { 00:26:49.435 "name": "spare", 00:26:49.435 "uuid": "996a0f52-242f-5898-b6b1-5ff366258d2d", 00:26:49.435 
"is_configured": true, 00:26:49.435 "data_offset": 256, 00:26:49.435 "data_size": 7936 00:26:49.435 }, 00:26:49.435 { 00:26:49.435 "name": "BaseBdev2", 00:26:49.435 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:26:49.435 "is_configured": true, 00:26:49.435 "data_offset": 256, 00:26:49.435 "data_size": 7936 00:26:49.435 } 00:26:49.435 ] 00:26:49.435 }' 00:26:49.435 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:49.435 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:49.435 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:49.435 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:49.435 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:49.694 [2024-07-25 13:27:00.046990] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:49.694 [2024-07-25 13:27:00.112436] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:49.694 [2024-07-25 13:27:00.112483] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:49.694 [2024-07-25 13:27:00.112498] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:49.694 [2024-07-25 13:27:00.112506] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:49.694 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:49.694 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:49.694 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:49.694 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:49.694 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:49.695 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:49.695 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:49.695 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:49.695 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:49.695 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:49.695 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.695 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.954 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:49.954 "name": "raid_bdev1", 00:26:49.954 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:26:49.954 "strip_size_kb": 0, 00:26:49.954 "state": "online", 00:26:49.954 "raid_level": "raid1", 00:26:49.954 "superblock": true, 00:26:49.954 "num_base_bdevs": 2, 00:26:49.954 "num_base_bdevs_discovered": 1, 00:26:49.954 "num_base_bdevs_operational": 1, 00:26:49.954 "base_bdevs_list": [ 00:26:49.954 { 00:26:49.954 "name": null, 00:26:49.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.954 "is_configured": false, 00:26:49.954 "data_offset": 256, 00:26:49.954 "data_size": 7936 00:26:49.954 }, 00:26:49.954 { 00:26:49.954 "name": "BaseBdev2", 00:26:49.954 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:26:49.954 
"is_configured": true, 00:26:49.954 "data_offset": 256, 00:26:49.954 "data_size": 7936 00:26:49.954 } 00:26:49.954 ] 00:26:49.954 }' 00:26:49.954 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:49.954 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:50.521 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:50.521 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:50.521 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:50.521 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:50.522 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:50.522 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.522 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.779 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:50.779 "name": "raid_bdev1", 00:26:50.779 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:26:50.779 "strip_size_kb": 0, 00:26:50.779 "state": "online", 00:26:50.779 "raid_level": "raid1", 00:26:50.779 "superblock": true, 00:26:50.779 "num_base_bdevs": 2, 00:26:50.779 "num_base_bdevs_discovered": 1, 00:26:50.779 "num_base_bdevs_operational": 1, 00:26:50.779 "base_bdevs_list": [ 00:26:50.779 { 00:26:50.779 "name": null, 00:26:50.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.779 "is_configured": false, 00:26:50.779 "data_offset": 256, 00:26:50.779 "data_size": 7936 00:26:50.779 }, 00:26:50.779 { 00:26:50.779 "name": "BaseBdev2", 
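The `verify_raid_bdev_state` checks in the trace work by fetching the full bdev list over the RPC socket and filtering it with `jq`, then comparing individual fields against expected values. A simplified sketch of that filter-and-compare step, run against an abridged copy of the JSON from the log instead of a live RPC call (assumes `jq` is installed):

```shell
# Abridged raid bdev info, as returned by 'bdev_raid_get_bdevs all'.
raid_json='[{"name":"raid_bdev1","state":"online","raid_level":"raid1",
            "num_base_bdevs":2,"num_base_bdevs_discovered":1}]'

# Select the bdev under test, exactly as the trace does.
info=$(echo "$raid_json" | jq -r '.[] | select(.name == "raid_bdev1")')

# Extract and compare the fields the state check cares about.
state=$(echo "$info" | jq -r '.state')
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')

[ "$state" = online ] && [ "$discovered" -eq 1 ] && echo "state verified"
```

After `bdev_raid_remove_base_bdev`, the array is expected to stay `online` with one discovered base bdev, which is exactly what the JSON above encodes.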
00:26:50.779 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:26:50.779 "is_configured": true, 00:26:50.779 "data_offset": 256, 00:26:50.779 "data_size": 7936 00:26:50.779 } 00:26:50.779 ] 00:26:50.779 }' 00:26:50.779 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:50.779 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:50.779 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:50.779 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:50.779 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:51.037 [2024-07-25 13:27:01.464422] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:51.037 [2024-07-25 13:27:01.469117] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x85ab40 00:26:51.037 [2024-07-25 13:27:01.470548] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:51.037 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@678 -- # sleep 1 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:52.415 "name": "raid_bdev1", 00:26:52.415 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:26:52.415 "strip_size_kb": 0, 00:26:52.415 "state": "online", 00:26:52.415 "raid_level": "raid1", 00:26:52.415 "superblock": true, 00:26:52.415 "num_base_bdevs": 2, 00:26:52.415 "num_base_bdevs_discovered": 2, 00:26:52.415 "num_base_bdevs_operational": 2, 00:26:52.415 "process": { 00:26:52.415 "type": "rebuild", 00:26:52.415 "target": "spare", 00:26:52.415 "progress": { 00:26:52.415 "blocks": 3072, 00:26:52.415 "percent": 38 00:26:52.415 } 00:26:52.415 }, 00:26:52.415 "base_bdevs_list": [ 00:26:52.415 { 00:26:52.415 "name": "spare", 00:26:52.415 "uuid": "996a0f52-242f-5898-b6b1-5ff366258d2d", 00:26:52.415 "is_configured": true, 00:26:52.415 "data_offset": 256, 00:26:52.415 "data_size": 7936 00:26:52.415 }, 00:26:52.415 { 00:26:52.415 "name": "BaseBdev2", 00:26:52.415 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:26:52.415 "is_configured": true, 00:26:52.415 "data_offset": 256, 00:26:52.415 "data_size": 7936 00:26:52.415 } 00:26:52.415 ] 00:26:52.415 }' 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # 
'[' true = true ']' 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:26:52.415 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # local timeout=977 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.415 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.674 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:52.674 "name": "raid_bdev1", 00:26:52.674 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:26:52.674 "strip_size_kb": 0, 00:26:52.674 "state": "online", 00:26:52.674 "raid_level": 
"raid1", 00:26:52.674 "superblock": true, 00:26:52.674 "num_base_bdevs": 2, 00:26:52.674 "num_base_bdevs_discovered": 2, 00:26:52.674 "num_base_bdevs_operational": 2, 00:26:52.674 "process": { 00:26:52.674 "type": "rebuild", 00:26:52.674 "target": "spare", 00:26:52.674 "progress": { 00:26:52.674 "blocks": 3840, 00:26:52.674 "percent": 48 00:26:52.674 } 00:26:52.674 }, 00:26:52.674 "base_bdevs_list": [ 00:26:52.674 { 00:26:52.674 "name": "spare", 00:26:52.674 "uuid": "996a0f52-242f-5898-b6b1-5ff366258d2d", 00:26:52.674 "is_configured": true, 00:26:52.674 "data_offset": 256, 00:26:52.674 "data_size": 7936 00:26:52.674 }, 00:26:52.674 { 00:26:52.674 "name": "BaseBdev2", 00:26:52.674 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:26:52.674 "is_configured": true, 00:26:52.674 "data_offset": 256, 00:26:52.674 "data_size": 7936 00:26:52.674 } 00:26:52.674 ] 00:26:52.674 }' 00:26:52.674 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:52.674 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:52.674 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:52.674 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:52.674 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:26:54.051 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:54.051 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:54.051 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:54.051 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:54.051 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:26:54.051 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:54.051 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.051 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.051 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:54.051 "name": "raid_bdev1", 00:26:54.051 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:26:54.051 "strip_size_kb": 0, 00:26:54.051 "state": "online", 00:26:54.051 "raid_level": "raid1", 00:26:54.051 "superblock": true, 00:26:54.051 "num_base_bdevs": 2, 00:26:54.051 "num_base_bdevs_discovered": 2, 00:26:54.051 "num_base_bdevs_operational": 2, 00:26:54.051 "process": { 00:26:54.051 "type": "rebuild", 00:26:54.051 "target": "spare", 00:26:54.051 "progress": { 00:26:54.051 "blocks": 7168, 00:26:54.051 "percent": 90 00:26:54.051 } 00:26:54.051 }, 00:26:54.051 "base_bdevs_list": [ 00:26:54.051 { 00:26:54.051 "name": "spare", 00:26:54.051 "uuid": "996a0f52-242f-5898-b6b1-5ff366258d2d", 00:26:54.051 "is_configured": true, 00:26:54.051 "data_offset": 256, 00:26:54.051 "data_size": 7936 00:26:54.051 }, 00:26:54.051 { 00:26:54.051 "name": "BaseBdev2", 00:26:54.051 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:26:54.051 "is_configured": true, 00:26:54.051 "data_offset": 256, 00:26:54.051 "data_size": 7936 00:26:54.051 } 00:26:54.051 ] 00:26:54.051 }' 00:26:54.051 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:54.051 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:54.051 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:26:54.051 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:54.051 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:26:54.310 [2024-07-25 13:27:04.592917] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:54.310 [2024-07-25 13:27:04.592969] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:54.310 [2024-07-25 13:27:04.593043] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:55.246 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:55.246 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:55.246 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:55.246 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:55.246 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:55.246 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:55.246 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.246 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.246 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:55.246 "name": "raid_bdev1", 00:26:55.246 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:26:55.246 "strip_size_kb": 0, 00:26:55.246 "state": "online", 00:26:55.246 "raid_level": "raid1", 00:26:55.246 "superblock": true, 00:26:55.246 "num_base_bdevs": 
2, 00:26:55.246 "num_base_bdevs_discovered": 2, 00:26:55.246 "num_base_bdevs_operational": 2, 00:26:55.246 "base_bdevs_list": [ 00:26:55.246 { 00:26:55.246 "name": "spare", 00:26:55.246 "uuid": "996a0f52-242f-5898-b6b1-5ff366258d2d", 00:26:55.246 "is_configured": true, 00:26:55.246 "data_offset": 256, 00:26:55.246 "data_size": 7936 00:26:55.246 }, 00:26:55.247 { 00:26:55.247 "name": "BaseBdev2", 00:26:55.247 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:26:55.247 "is_configured": true, 00:26:55.247 "data_offset": 256, 00:26:55.247 "data_size": 7936 00:26:55.247 } 00:26:55.247 ] 00:26:55.247 }' 00:26:55.247 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:55.505 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:55.505 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:55.505 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:55.505 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@724 -- # break 00:26:55.505 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:55.505 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:55.505 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:55.505 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:55.505 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:55.505 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.505 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.764 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:55.764 "name": "raid_bdev1", 00:26:55.764 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:26:55.764 "strip_size_kb": 0, 00:26:55.764 "state": "online", 00:26:55.764 "raid_level": "raid1", 00:26:55.764 "superblock": true, 00:26:55.764 "num_base_bdevs": 2, 00:26:55.764 "num_base_bdevs_discovered": 2, 00:26:55.764 "num_base_bdevs_operational": 2, 00:26:55.764 "base_bdevs_list": [ 00:26:55.764 { 00:26:55.765 "name": "spare", 00:26:55.765 "uuid": "996a0f52-242f-5898-b6b1-5ff366258d2d", 00:26:55.765 "is_configured": true, 00:26:55.765 "data_offset": 256, 00:26:55.765 "data_size": 7936 00:26:55.765 }, 00:26:55.765 { 00:26:55.765 "name": "BaseBdev2", 00:26:55.765 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:26:55.765 "is_configured": true, 00:26:55.765 "data_offset": 256, 00:26:55.765 "data_size": 7936 00:26:55.765 } 00:26:55.765 ] 00:26:55.765 }' 00:26:55.765 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:55.765 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:55.765 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:55.765 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:55.765 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:55.765 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:55.765 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:55.765 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:55.765 13:27:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:55.765 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:55.765 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:55.765 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:55.765 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:55.765 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:55.765 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.765 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.023 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:56.023 "name": "raid_bdev1", 00:26:56.023 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:26:56.023 "strip_size_kb": 0, 00:26:56.023 "state": "online", 00:26:56.023 "raid_level": "raid1", 00:26:56.023 "superblock": true, 00:26:56.023 "num_base_bdevs": 2, 00:26:56.023 "num_base_bdevs_discovered": 2, 00:26:56.023 "num_base_bdevs_operational": 2, 00:26:56.023 "base_bdevs_list": [ 00:26:56.023 { 00:26:56.023 "name": "spare", 00:26:56.023 "uuid": "996a0f52-242f-5898-b6b1-5ff366258d2d", 00:26:56.023 "is_configured": true, 00:26:56.023 "data_offset": 256, 00:26:56.023 "data_size": 7936 00:26:56.023 }, 00:26:56.023 { 00:26:56.023 "name": "BaseBdev2", 00:26:56.023 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:26:56.023 "is_configured": true, 00:26:56.023 "data_offset": 256, 00:26:56.023 "data_size": 7936 00:26:56.023 } 00:26:56.023 ] 00:26:56.023 }' 00:26:56.023 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:56.023 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:56.589 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:56.847 [2024-07-25 13:27:07.095998] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:56.847 [2024-07-25 13:27:07.096020] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:56.847 [2024-07-25 13:27:07.096069] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:56.847 [2024-07-25 13:27:07.096118] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:56.847 [2024-07-25 13:27:07.096129] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb54290 name raid_bdev1, state offline 00:26:56.847 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # jq length 00:26:56.847 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.105 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:26:57.105 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:26:57.106 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:26:57.106 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:57.106 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:57.106 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:26:57.106 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:57.106 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.106 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:57.106 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:26:57.106 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:57.106 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:57.106 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:57.106 /dev/nbd0 00:26:57.364 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:57.364 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:57.364 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:57.364 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:26:57.364 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:57.365 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:57.365 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:57.365 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:26:57.365 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:57.365 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:57.365 13:27:07 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:57.365 1+0 records in 00:26:57.365 1+0 records out 00:26:57.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255224 s, 16.0 MB/s 00:26:57.365 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:57.365 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:26:57.365 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:57.365 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:57.365 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:26:57.365 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:57.365 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:57.365 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:57.365 /dev/nbd1 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:57.623 1+0 records in 00:26:57.623 1+0 records out 00:26:57.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028574 s, 14.3 MB/s 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:57.623 13:27:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:57.623 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:57.881 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:57.881 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:57.881 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:57.881 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:57.881 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:57.881 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:57.881 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:26:57.881 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:26:57.881 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:57.881 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:58.139 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:58.139 13:27:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:58.139 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:58.139 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:58.139 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:58.139 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:58.139 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:26:58.139 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:26:58.139 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:26:58.139 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:58.397 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:58.654 [2024-07-25 13:27:08.889492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:58.655 [2024-07-25 13:27:08.889530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:58.655 [2024-07-25 13:27:08.889548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb554a0 00:26:58.655 [2024-07-25 13:27:08.889559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:58.655 [2024-07-25 13:27:08.891054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:58.655 [2024-07-25 13:27:08.891082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:58.655 [2024-07-25 13:27:08.891166] 
bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:58.655 [2024-07-25 13:27:08.891190] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:58.655 [2024-07-25 13:27:08.891282] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:58.655 spare 00:26:58.655 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:58.655 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:58.655 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:58.655 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:58.655 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:58.655 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:58.655 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:58.655 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:58.655 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:58.655 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:58.655 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.655 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.655 [2024-07-25 13:27:08.991588] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xb53b00 00:26:58.655 [2024-07-25 13:27:08.991601] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:58.655 [2024-07-25 13:27:08.991773] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb54410 00:26:58.655 [2024-07-25 13:27:08.991906] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb53b00 00:26:58.655 [2024-07-25 13:27:08.991916] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb53b00 00:26:58.655 [2024-07-25 13:27:08.992009] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:58.913 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:58.913 "name": "raid_bdev1", 00:26:58.913 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:26:58.913 "strip_size_kb": 0, 00:26:58.913 "state": "online", 00:26:58.913 "raid_level": "raid1", 00:26:58.913 "superblock": true, 00:26:58.913 "num_base_bdevs": 2, 00:26:58.913 "num_base_bdevs_discovered": 2, 00:26:58.913 "num_base_bdevs_operational": 2, 00:26:58.913 "base_bdevs_list": [ 00:26:58.913 { 00:26:58.913 "name": "spare", 00:26:58.913 "uuid": "996a0f52-242f-5898-b6b1-5ff366258d2d", 00:26:58.913 "is_configured": true, 00:26:58.913 "data_offset": 256, 00:26:58.913 "data_size": 7936 00:26:58.913 }, 00:26:58.913 { 00:26:58.913 "name": "BaseBdev2", 00:26:58.914 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:26:58.914 "is_configured": true, 00:26:58.914 "data_offset": 256, 00:26:58.914 "data_size": 7936 00:26:58.914 } 00:26:58.914 ] 00:26:58.914 }' 00:26:58.914 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:58.914 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:59.480 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:59.480 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:26:59.480 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:59.480 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:59.480 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:59.480 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.480 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.480 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:59.480 "name": "raid_bdev1", 00:26:59.480 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:26:59.480 "strip_size_kb": 0, 00:26:59.480 "state": "online", 00:26:59.480 "raid_level": "raid1", 00:26:59.481 "superblock": true, 00:26:59.481 "num_base_bdevs": 2, 00:26:59.481 "num_base_bdevs_discovered": 2, 00:26:59.481 "num_base_bdevs_operational": 2, 00:26:59.481 "base_bdevs_list": [ 00:26:59.481 { 00:26:59.481 "name": "spare", 00:26:59.481 "uuid": "996a0f52-242f-5898-b6b1-5ff366258d2d", 00:26:59.481 "is_configured": true, 00:26:59.481 "data_offset": 256, 00:26:59.481 "data_size": 7936 00:26:59.481 }, 00:26:59.481 { 00:26:59.481 "name": "BaseBdev2", 00:26:59.481 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:26:59.481 "is_configured": true, 00:26:59.481 "data_offset": 256, 00:26:59.481 "data_size": 7936 00:26:59.481 } 00:26:59.481 ] 00:26:59.481 }' 00:26:59.481 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:59.739 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:59.739 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:26:59.739 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:59.739 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.739 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:59.998 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:26:59.998 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:00.256 [2024-07-25 13:27:10.490023] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:00.256 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:00.256 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:00.256 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:00.256 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:00.256 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:00.256 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:00.256 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:00.256 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:00.256 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:00.256 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:00.256 
13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.256 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.256 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:00.256 "name": "raid_bdev1", 00:27:00.256 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:27:00.256 "strip_size_kb": 0, 00:27:00.256 "state": "online", 00:27:00.256 "raid_level": "raid1", 00:27:00.256 "superblock": true, 00:27:00.256 "num_base_bdevs": 2, 00:27:00.256 "num_base_bdevs_discovered": 1, 00:27:00.256 "num_base_bdevs_operational": 1, 00:27:00.256 "base_bdevs_list": [ 00:27:00.256 { 00:27:00.256 "name": null, 00:27:00.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.256 "is_configured": false, 00:27:00.256 "data_offset": 256, 00:27:00.256 "data_size": 7936 00:27:00.256 }, 00:27:00.256 { 00:27:00.256 "name": "BaseBdev2", 00:27:00.256 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:27:00.256 "is_configured": true, 00:27:00.256 "data_offset": 256, 00:27:00.256 "data_size": 7936 00:27:00.256 } 00:27:00.256 ] 00:27:00.256 }' 00:27:00.256 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:00.256 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:00.822 13:27:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:01.080 [2024-07-25 13:27:11.516742] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:01.080 [2024-07-25 13:27:11.516871] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:27:01.080 [2024-07-25 13:27:11.516886] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:01.080 [2024-07-25 13:27:11.516913] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:01.080 [2024-07-25 13:27:11.521505] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb59e90 00:27:01.080 [2024-07-25 13:27:11.523639] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:01.080 13:27:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # sleep 1 00:27:02.488 13:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:02.488 13:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:02.488 13:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:02.488 13:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:02.488 13:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:02.488 13:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.488 13:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.488 13:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:02.488 "name": "raid_bdev1", 00:27:02.488 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:27:02.488 "strip_size_kb": 0, 00:27:02.488 "state": "online", 00:27:02.488 "raid_level": "raid1", 00:27:02.488 "superblock": true, 00:27:02.488 "num_base_bdevs": 2, 00:27:02.488 "num_base_bdevs_discovered": 2, 00:27:02.488 "num_base_bdevs_operational": 2, 
00:27:02.488 "process": { 00:27:02.488 "type": "rebuild", 00:27:02.488 "target": "spare", 00:27:02.488 "progress": { 00:27:02.488 "blocks": 3072, 00:27:02.488 "percent": 38 00:27:02.488 } 00:27:02.488 }, 00:27:02.488 "base_bdevs_list": [ 00:27:02.488 { 00:27:02.488 "name": "spare", 00:27:02.488 "uuid": "996a0f52-242f-5898-b6b1-5ff366258d2d", 00:27:02.488 "is_configured": true, 00:27:02.488 "data_offset": 256, 00:27:02.488 "data_size": 7936 00:27:02.488 }, 00:27:02.488 { 00:27:02.488 "name": "BaseBdev2", 00:27:02.488 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:27:02.488 "is_configured": true, 00:27:02.488 "data_offset": 256, 00:27:02.488 "data_size": 7936 00:27:02.488 } 00:27:02.488 ] 00:27:02.488 }' 00:27:02.488 13:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:02.488 13:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:02.488 13:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:02.488 13:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:02.488 13:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:02.747 [2024-07-25 13:27:13.062720] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:02.748 [2024-07-25 13:27:13.135294] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:02.748 [2024-07-25 13:27:13.135335] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:02.748 [2024-07-25 13:27:13.135348] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:02.748 [2024-07-25 13:27:13.135356] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target 
bdev: No such device 00:27:02.748 13:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:02.748 13:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:02.748 13:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:02.748 13:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:02.748 13:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:02.748 13:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:02.748 13:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:02.748 13:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:02.748 13:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:02.748 13:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:02.748 13:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.748 13:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.007 13:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:03.007 "name": "raid_bdev1", 00:27:03.007 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:27:03.007 "strip_size_kb": 0, 00:27:03.007 "state": "online", 00:27:03.007 "raid_level": "raid1", 00:27:03.007 "superblock": true, 00:27:03.007 "num_base_bdevs": 2, 00:27:03.007 "num_base_bdevs_discovered": 1, 00:27:03.007 "num_base_bdevs_operational": 1, 00:27:03.007 "base_bdevs_list": [ 00:27:03.007 { 
00:27:03.007 "name": null, 00:27:03.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.007 "is_configured": false, 00:27:03.007 "data_offset": 256, 00:27:03.007 "data_size": 7936 00:27:03.007 }, 00:27:03.007 { 00:27:03.007 "name": "BaseBdev2", 00:27:03.007 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:27:03.007 "is_configured": true, 00:27:03.007 "data_offset": 256, 00:27:03.007 "data_size": 7936 00:27:03.007 } 00:27:03.007 ] 00:27:03.007 }' 00:27:03.007 13:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:03.007 13:27:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:03.575 13:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:03.834 [2024-07-25 13:27:14.174132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:03.834 [2024-07-25 13:27:14.174182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:03.834 [2024-07-25 13:27:14.174201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcf52f0 00:27:03.834 [2024-07-25 13:27:14.174213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:03.834 [2024-07-25 13:27:14.174548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:03.834 [2024-07-25 13:27:14.174566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:03.834 [2024-07-25 13:27:14.174637] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:03.834 [2024-07-25 13:27:14.174648] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:03.834 [2024-07-25 13:27:14.174659] bdev_raid.c:3738:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:03.834 [2024-07-25 13:27:14.174676] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:03.834 [2024-07-25 13:27:14.179300] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb56370 00:27:03.834 spare 00:27:03.834 [2024-07-25 13:27:14.180572] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:03.834 13:27:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # sleep 1 00:27:04.769 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:04.769 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:04.769 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:04.769 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:04.769 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:04.769 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.769 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:05.027 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:05.027 "name": "raid_bdev1", 00:27:05.027 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:27:05.027 "strip_size_kb": 0, 00:27:05.027 "state": "online", 00:27:05.027 "raid_level": "raid1", 00:27:05.027 "superblock": true, 00:27:05.027 "num_base_bdevs": 2, 00:27:05.027 "num_base_bdevs_discovered": 2, 00:27:05.028 "num_base_bdevs_operational": 2, 00:27:05.028 "process": { 00:27:05.028 "type": "rebuild", 00:27:05.028 "target": 
"spare", 00:27:05.028 "progress": { 00:27:05.028 "blocks": 3072, 00:27:05.028 "percent": 38 00:27:05.028 } 00:27:05.028 }, 00:27:05.028 "base_bdevs_list": [ 00:27:05.028 { 00:27:05.028 "name": "spare", 00:27:05.028 "uuid": "996a0f52-242f-5898-b6b1-5ff366258d2d", 00:27:05.028 "is_configured": true, 00:27:05.028 "data_offset": 256, 00:27:05.028 "data_size": 7936 00:27:05.028 }, 00:27:05.028 { 00:27:05.028 "name": "BaseBdev2", 00:27:05.028 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:27:05.028 "is_configured": true, 00:27:05.028 "data_offset": 256, 00:27:05.028 "data_size": 7936 00:27:05.028 } 00:27:05.028 ] 00:27:05.028 }' 00:27:05.028 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:05.028 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:05.028 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:05.286 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:05.286 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:05.286 [2024-07-25 13:27:15.731691] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:05.545 [2024-07-25 13:27:15.792459] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:05.545 [2024-07-25 13:27:15.792502] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:05.545 [2024-07-25 13:27:15.792516] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:05.545 [2024-07-25 13:27:15.792523] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:05.545 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:05.545 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:05.545 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:05.545 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:05.545 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:05.545 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:05.545 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:05.545 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:05.545 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:05.545 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:05.545 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.545 13:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:05.803 13:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:05.803 "name": "raid_bdev1", 00:27:05.803 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:27:05.803 "strip_size_kb": 0, 00:27:05.803 "state": "online", 00:27:05.803 "raid_level": "raid1", 00:27:05.803 "superblock": true, 00:27:05.803 "num_base_bdevs": 2, 00:27:05.803 "num_base_bdevs_discovered": 1, 00:27:05.803 "num_base_bdevs_operational": 1, 00:27:05.803 "base_bdevs_list": [ 00:27:05.803 { 00:27:05.803 "name": null, 00:27:05.803 "uuid": "00000000-0000-0000-0000-000000000000", 
00:27:05.803 "is_configured": false, 00:27:05.803 "data_offset": 256, 00:27:05.803 "data_size": 7936 00:27:05.803 }, 00:27:05.803 { 00:27:05.803 "name": "BaseBdev2", 00:27:05.803 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:27:05.803 "is_configured": true, 00:27:05.803 "data_offset": 256, 00:27:05.803 "data_size": 7936 00:27:05.803 } 00:27:05.803 ] 00:27:05.803 }' 00:27:05.803 13:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:05.803 13:27:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.370 13:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:06.370 13:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:06.370 13:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:06.370 13:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:06.370 13:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:06.370 13:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.370 13:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.370 13:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:06.370 "name": "raid_bdev1", 00:27:06.370 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:27:06.370 "strip_size_kb": 0, 00:27:06.370 "state": "online", 00:27:06.370 "raid_level": "raid1", 00:27:06.370 "superblock": true, 00:27:06.370 "num_base_bdevs": 2, 00:27:06.370 "num_base_bdevs_discovered": 1, 00:27:06.370 "num_base_bdevs_operational": 1, 00:27:06.370 "base_bdevs_list": [ 00:27:06.370 { 00:27:06.370 
"name": null, 00:27:06.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.370 "is_configured": false, 00:27:06.370 "data_offset": 256, 00:27:06.370 "data_size": 7936 00:27:06.370 }, 00:27:06.370 { 00:27:06.370 "name": "BaseBdev2", 00:27:06.370 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:27:06.370 "is_configured": true, 00:27:06.370 "data_offset": 256, 00:27:06.370 "data_size": 7936 00:27:06.370 } 00:27:06.370 ] 00:27:06.370 }' 00:27:06.370 13:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:06.628 13:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:06.628 13:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:06.628 13:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:06.628 13:27:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:06.888 13:27:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:06.888 [2024-07-25 13:27:17.360784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:06.888 [2024-07-25 13:27:17.360829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.888 [2024-07-25 13:27:17.360848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb55ef0 00:27:06.888 [2024-07-25 13:27:17.360859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.888 [2024-07-25 13:27:17.361182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.888 [2024-07-25 13:27:17.361198] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:06.888 [2024-07-25 13:27:17.361254] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:06.888 [2024-07-25 13:27:17.361265] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:06.888 [2024-07-25 13:27:17.361274] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:06.888 BaseBdev1 00:27:07.147 13:27:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@789 -- # sleep 1 00:27:08.083 13:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:08.083 13:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:08.083 13:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:08.083 13:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:08.083 13:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:08.083 13:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:08.083 13:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:08.083 13:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:08.083 13:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:08.083 13:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:08.083 13:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:27:08.083 13:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.342 13:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:08.342 "name": "raid_bdev1", 00:27:08.342 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:27:08.342 "strip_size_kb": 0, 00:27:08.342 "state": "online", 00:27:08.342 "raid_level": "raid1", 00:27:08.342 "superblock": true, 00:27:08.342 "num_base_bdevs": 2, 00:27:08.342 "num_base_bdevs_discovered": 1, 00:27:08.342 "num_base_bdevs_operational": 1, 00:27:08.342 "base_bdevs_list": [ 00:27:08.342 { 00:27:08.342 "name": null, 00:27:08.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.342 "is_configured": false, 00:27:08.342 "data_offset": 256, 00:27:08.342 "data_size": 7936 00:27:08.342 }, 00:27:08.342 { 00:27:08.342 "name": "BaseBdev2", 00:27:08.342 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:27:08.342 "is_configured": true, 00:27:08.342 "data_offset": 256, 00:27:08.342 "data_size": 7936 00:27:08.342 } 00:27:08.342 ] 00:27:08.342 }' 00:27:08.342 13:27:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:08.342 13:27:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:08.910 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:08.910 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:08.910 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:08.910 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:08.910 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:08.910 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.910 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.169 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:09.169 "name": "raid_bdev1", 00:27:09.169 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:27:09.169 "strip_size_kb": 0, 00:27:09.169 "state": "online", 00:27:09.169 "raid_level": "raid1", 00:27:09.169 "superblock": true, 00:27:09.169 "num_base_bdevs": 2, 00:27:09.169 "num_base_bdevs_discovered": 1, 00:27:09.169 "num_base_bdevs_operational": 1, 00:27:09.169 "base_bdevs_list": [ 00:27:09.169 { 00:27:09.169 "name": null, 00:27:09.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:09.169 "is_configured": false, 00:27:09.169 "data_offset": 256, 00:27:09.169 "data_size": 7936 00:27:09.169 }, 00:27:09.169 { 00:27:09.169 "name": "BaseBdev2", 00:27:09.169 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:27:09.169 "is_configured": true, 00:27:09.169 "data_offset": 256, 00:27:09.169 "data_size": 7936 00:27:09.169 } 00:27:09.169 ] 00:27:09.169 }' 00:27:09.169 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:09.169 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:09.169 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:09.169 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:09.169 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:09.169 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 
00:27:09.169 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:09.169 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:09.169 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:09.169 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:09.169 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:09.169 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:09.170 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:09.170 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:09.170 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:09.170 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:09.428 [2024-07-25 13:27:19.715095] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:09.428 [2024-07-25 13:27:19.715207] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:09.428 [2024-07-25 
13:27:19.715221] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:09.428 request: 00:27:09.428 { 00:27:09.428 "base_bdev": "BaseBdev1", 00:27:09.428 "raid_bdev": "raid_bdev1", 00:27:09.428 "method": "bdev_raid_add_base_bdev", 00:27:09.428 "req_id": 1 00:27:09.428 } 00:27:09.428 Got JSON-RPC error response 00:27:09.428 response: 00:27:09.428 { 00:27:09.428 "code": -22, 00:27:09.428 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:09.428 } 00:27:09.428 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:27:09.428 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:09.428 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:09.428 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:09.428 13:27:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@793 -- # sleep 1 00:27:10.364 13:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:10.364 13:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:10.364 13:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:10.364 13:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:10.364 13:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:10.364 13:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:10.364 13:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:10.364 13:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:10.364 13:27:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:10.364 13:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:10.364 13:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.364 13:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.624 13:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:10.624 "name": "raid_bdev1", 00:27:10.624 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:27:10.624 "strip_size_kb": 0, 00:27:10.624 "state": "online", 00:27:10.624 "raid_level": "raid1", 00:27:10.624 "superblock": true, 00:27:10.624 "num_base_bdevs": 2, 00:27:10.624 "num_base_bdevs_discovered": 1, 00:27:10.624 "num_base_bdevs_operational": 1, 00:27:10.624 "base_bdevs_list": [ 00:27:10.624 { 00:27:10.624 "name": null, 00:27:10.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.624 "is_configured": false, 00:27:10.624 "data_offset": 256, 00:27:10.624 "data_size": 7936 00:27:10.624 }, 00:27:10.624 { 00:27:10.624 "name": "BaseBdev2", 00:27:10.624 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:27:10.624 "is_configured": true, 00:27:10.624 "data_offset": 256, 00:27:10.624 "data_size": 7936 00:27:10.624 } 00:27:10.624 ] 00:27:10.624 }' 00:27:10.624 13:27:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:10.624 13:27:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:11.191 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:11.192 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:11.192 13:27:21 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:11.192 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:11.192 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:11.192 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.192 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.451 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:11.451 "name": "raid_bdev1", 00:27:11.451 "uuid": "a3cb1fe2-e3ed-47c8-b8e2-fd8b41be6433", 00:27:11.451 "strip_size_kb": 0, 00:27:11.451 "state": "online", 00:27:11.451 "raid_level": "raid1", 00:27:11.451 "superblock": true, 00:27:11.451 "num_base_bdevs": 2, 00:27:11.451 "num_base_bdevs_discovered": 1, 00:27:11.451 "num_base_bdevs_operational": 1, 00:27:11.451 "base_bdevs_list": [ 00:27:11.451 { 00:27:11.451 "name": null, 00:27:11.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.451 "is_configured": false, 00:27:11.451 "data_offset": 256, 00:27:11.451 "data_size": 7936 00:27:11.451 }, 00:27:11.451 { 00:27:11.451 "name": "BaseBdev2", 00:27:11.451 "uuid": "7a2a5db5-b4ba-54b4-9fbc-f59a91d7f238", 00:27:11.451 "is_configured": true, 00:27:11.451 "data_offset": 256, 00:27:11.451 "data_size": 7936 00:27:11.451 } 00:27:11.451 ] 00:27:11.451 }' 00:27:11.451 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:11.451 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:11.451 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:11.451 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ 
none == \n\o\n\e ]] 00:27:11.451 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@798 -- # killprocess 999016 00:27:11.451 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 999016 ']' 00:27:11.451 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 999016 00:27:11.451 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:27:11.451 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:11.451 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 999016 00:27:11.451 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:11.451 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:11.451 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 999016' 00:27:11.451 killing process with pid 999016 00:27:11.451 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 999016 00:27:11.451 Received shutdown signal, test time was about 60.000000 seconds 00:27:11.451 00:27:11.451 Latency(us) 00:27:11.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.451 =================================================================================================================== 00:27:11.451 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:11.451 [2024-07-25 13:27:21.911265] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:11.451 [2024-07-25 13:27:21.911339] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:11.451 [2024-07-25 13:27:21.911376] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:27:11.451 [2024-07-25 13:27:21.911387] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb53b00 name raid_bdev1, state offline 00:27:11.451 13:27:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 999016 00:27:11.451 [2024-07-25 13:27:21.934653] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:11.710 13:27:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@800 -- # return 0 00:27:11.710 00:27:11.710 real 0m30.152s 00:27:11.710 user 0m46.570s 00:27:11.710 sys 0m4.941s 00:27:11.710 13:27:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:11.710 13:27:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:11.710 ************************************ 00:27:11.710 END TEST raid_rebuild_test_sb_4k 00:27:11.710 ************************************ 00:27:11.710 13:27:22 bdev_raid -- bdev/bdev_raid.sh@984 -- # base_malloc_params='-m 32' 00:27:11.710 13:27:22 bdev_raid -- bdev/bdev_raid.sh@985 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:27:11.710 13:27:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:11.710 13:27:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:11.710 13:27:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:11.969 ************************************ 00:27:11.969 START TEST raid_state_function_test_sb_md_separate 00:27:11.969 ************************************ 00:27:11.969 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:27:11.969 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:27:11.969 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:27:11.969 13:27:22 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:27:11.969 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:27:11.969 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:27:11.969 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:11.969 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:27:11.969 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:11.969 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:11.969 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:27:11.969 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:27:11.970 13:27:22 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=1004988 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1004988' 00:27:11.970 Process raid pid: 1004988 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 1004988 /var/tmp/spdk-raid.sock 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 1004988 ']' 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:11.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:11.970 13:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:11.970 [2024-07-25 13:27:22.278157] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:27:11.970 [2024-07-25 13:27:22.278215] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 
EAL: Requested device 0000:3d:02.1 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 
0000:3f:01.7 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:11.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.970 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:11.970 [2024-07-25 13:27:22.409613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.229 [2024-07-25 13:27:22.496190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.229 [2024-07-25 13:27:22.549860] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:12.229 [2024-07-25 13:27:22.549886] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:12.797 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:12.797 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:27:12.797 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:13.055 [2024-07-25 13:27:23.371271] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:13.055 [2024-07-25 13:27:23.371306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:13.055 [2024-07-25 13:27:23.371316] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:13.055 [2024-07-25 13:27:23.371327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:13.055 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:13.055 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:13.055 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:13.055 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:13.055 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:13.055 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:13.055 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:13.055 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:13.055 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:13.055 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 
00:27:13.055 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.055 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:13.314 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:13.314 "name": "Existed_Raid", 00:27:13.314 "uuid": "18ad96af-bf3b-433a-8dbb-d74832b78c50", 00:27:13.314 "strip_size_kb": 0, 00:27:13.314 "state": "configuring", 00:27:13.314 "raid_level": "raid1", 00:27:13.314 "superblock": true, 00:27:13.314 "num_base_bdevs": 2, 00:27:13.314 "num_base_bdevs_discovered": 0, 00:27:13.314 "num_base_bdevs_operational": 2, 00:27:13.314 "base_bdevs_list": [ 00:27:13.314 { 00:27:13.314 "name": "BaseBdev1", 00:27:13.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.314 "is_configured": false, 00:27:13.314 "data_offset": 0, 00:27:13.314 "data_size": 0 00:27:13.314 }, 00:27:13.314 { 00:27:13.314 "name": "BaseBdev2", 00:27:13.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.314 "is_configured": false, 00:27:13.314 "data_offset": 0, 00:27:13.314 "data_size": 0 00:27:13.314 } 00:27:13.314 ] 00:27:13.314 }' 00:27:13.314 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:13.314 13:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:13.882 13:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:14.141 [2024-07-25 13:27:24.401988] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:14.141 [2024-07-25 13:27:24.402018] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17dcf20 name Existed_Raid, state configuring 00:27:14.141 13:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:14.141 [2024-07-25 13:27:24.626596] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:14.141 [2024-07-25 13:27:24.626635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:14.141 [2024-07-25 13:27:24.626645] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:14.141 [2024-07-25 13:27:24.626655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:14.399 13:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:27:14.399 [2024-07-25 13:27:24.857152] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:14.399 BaseBdev1 00:27:14.399 13:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:27:14.399 13:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:27:14.399 13:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:14.399 13:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:27:14.399 13:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:14.399 13:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:14.399 13:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:14.657 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:14.915 [ 00:27:14.915 { 00:27:14.915 "name": "BaseBdev1", 00:27:14.915 "aliases": [ 00:27:14.915 "4e974a18-3d6a-498f-b094-774803524702" 00:27:14.915 ], 00:27:14.915 "product_name": "Malloc disk", 00:27:14.915 "block_size": 4096, 00:27:14.915 "num_blocks": 8192, 00:27:14.916 "uuid": "4e974a18-3d6a-498f-b094-774803524702", 00:27:14.916 "md_size": 32, 00:27:14.916 "md_interleave": false, 00:27:14.916 "dif_type": 0, 00:27:14.916 "assigned_rate_limits": { 00:27:14.916 "rw_ios_per_sec": 0, 00:27:14.916 "rw_mbytes_per_sec": 0, 00:27:14.916 "r_mbytes_per_sec": 0, 00:27:14.916 "w_mbytes_per_sec": 0 00:27:14.916 }, 00:27:14.916 "claimed": true, 00:27:14.916 "claim_type": "exclusive_write", 00:27:14.916 "zoned": false, 00:27:14.916 "supported_io_types": { 00:27:14.916 "read": true, 00:27:14.916 "write": true, 00:27:14.916 "unmap": true, 00:27:14.916 "flush": true, 00:27:14.916 "reset": true, 00:27:14.916 "nvme_admin": false, 00:27:14.916 "nvme_io": false, 00:27:14.916 "nvme_io_md": false, 00:27:14.916 "write_zeroes": true, 00:27:14.916 "zcopy": true, 00:27:14.916 "get_zone_info": false, 00:27:14.916 "zone_management": false, 00:27:14.916 "zone_append": false, 00:27:14.916 "compare": false, 00:27:14.916 "compare_and_write": false, 00:27:14.916 "abort": true, 00:27:14.916 "seek_hole": false, 00:27:14.916 "seek_data": false, 00:27:14.916 "copy": true, 00:27:14.916 "nvme_iov_md": false 00:27:14.916 }, 00:27:14.916 "memory_domains": [ 00:27:14.916 { 00:27:14.916 
"dma_device_id": "system", 00:27:14.916 "dma_device_type": 1 00:27:14.916 }, 00:27:14.916 { 00:27:14.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.916 "dma_device_type": 2 00:27:14.916 } 00:27:14.916 ], 00:27:14.916 "driver_specific": {} 00:27:14.916 } 00:27:14.916 ] 00:27:14.916 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:27:14.916 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:14.916 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:14.916 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:14.916 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:14.916 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:14.916 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:14.916 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:14.916 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:14.916 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:14.916 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:14.916 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.916 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:15.175 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:15.175 "name": "Existed_Raid", 00:27:15.175 "uuid": "980d8c77-f611-4731-aeab-1a3489c8c817", 00:27:15.175 "strip_size_kb": 0, 00:27:15.175 "state": "configuring", 00:27:15.175 "raid_level": "raid1", 00:27:15.175 "superblock": true, 00:27:15.175 "num_base_bdevs": 2, 00:27:15.175 "num_base_bdevs_discovered": 1, 00:27:15.175 "num_base_bdevs_operational": 2, 00:27:15.175 "base_bdevs_list": [ 00:27:15.175 { 00:27:15.175 "name": "BaseBdev1", 00:27:15.175 "uuid": "4e974a18-3d6a-498f-b094-774803524702", 00:27:15.175 "is_configured": true, 00:27:15.175 "data_offset": 256, 00:27:15.175 "data_size": 7936 00:27:15.175 }, 00:27:15.175 { 00:27:15.175 "name": "BaseBdev2", 00:27:15.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.175 "is_configured": false, 00:27:15.175 "data_offset": 0, 00:27:15.175 "data_size": 0 00:27:15.175 } 00:27:15.175 ] 00:27:15.175 }' 00:27:15.175 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:15.175 13:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:15.743 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:16.002 [2024-07-25 13:27:26.353079] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:16.002 [2024-07-25 13:27:26.353111] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17dc810 name Existed_Raid, state configuring 00:27:16.002 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:16.260 [2024-07-25 13:27:26.581718] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:16.260 [2024-07-25 13:27:26.583025] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:16.260 [2024-07-25 13:27:26.583054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:16.260 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:27:16.260 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:16.260 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:16.260 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:16.260 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:16.260 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:16.260 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:16.260 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:16.261 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:16.261 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:16.261 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:16.261 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:27:16.261 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.261 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:16.519 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:16.519 "name": "Existed_Raid", 00:27:16.519 "uuid": "8730520a-210b-4b38-a9b8-02a6206e63c1", 00:27:16.519 "strip_size_kb": 0, 00:27:16.519 "state": "configuring", 00:27:16.519 "raid_level": "raid1", 00:27:16.519 "superblock": true, 00:27:16.519 "num_base_bdevs": 2, 00:27:16.519 "num_base_bdevs_discovered": 1, 00:27:16.519 "num_base_bdevs_operational": 2, 00:27:16.519 "base_bdevs_list": [ 00:27:16.519 { 00:27:16.519 "name": "BaseBdev1", 00:27:16.519 "uuid": "4e974a18-3d6a-498f-b094-774803524702", 00:27:16.519 "is_configured": true, 00:27:16.519 "data_offset": 256, 00:27:16.519 "data_size": 7936 00:27:16.519 }, 00:27:16.519 { 00:27:16.519 "name": "BaseBdev2", 00:27:16.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.519 "is_configured": false, 00:27:16.519 "data_offset": 0, 00:27:16.519 "data_size": 0 00:27:16.519 } 00:27:16.519 ] 00:27:16.519 }' 00:27:16.519 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:16.519 13:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:17.150 13:27:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:27:17.409 [2024-07-25 13:27:27.648233] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:17.409 [2024-07-25 13:27:27.648351] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x17de790 00:27:17.409 [2024-07-25 13:27:27.648363] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:17.409 [2024-07-25 13:27:27.648417] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17de1d0 00:27:17.409 [2024-07-25 13:27:27.648505] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17de790 00:27:17.409 [2024-07-25 13:27:27.648514] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x17de790 00:27:17.409 [2024-07-25 13:27:27.648571] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:17.409 BaseBdev2 00:27:17.409 13:27:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:27:17.409 13:27:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:27:17.409 13:27:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:17.409 13:27:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:27:17.409 13:27:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:17.409 13:27:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:17.409 13:27:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:17.409 13:27:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:17.668 [ 00:27:17.668 { 00:27:17.668 
"name": "BaseBdev2", 00:27:17.668 "aliases": [ 00:27:17.668 "06837bcd-709c-4fba-990a-46ac139b8b84" 00:27:17.668 ], 00:27:17.668 "product_name": "Malloc disk", 00:27:17.668 "block_size": 4096, 00:27:17.668 "num_blocks": 8192, 00:27:17.668 "uuid": "06837bcd-709c-4fba-990a-46ac139b8b84", 00:27:17.668 "md_size": 32, 00:27:17.668 "md_interleave": false, 00:27:17.668 "dif_type": 0, 00:27:17.668 "assigned_rate_limits": { 00:27:17.668 "rw_ios_per_sec": 0, 00:27:17.668 "rw_mbytes_per_sec": 0, 00:27:17.668 "r_mbytes_per_sec": 0, 00:27:17.668 "w_mbytes_per_sec": 0 00:27:17.668 }, 00:27:17.668 "claimed": true, 00:27:17.668 "claim_type": "exclusive_write", 00:27:17.668 "zoned": false, 00:27:17.668 "supported_io_types": { 00:27:17.668 "read": true, 00:27:17.668 "write": true, 00:27:17.668 "unmap": true, 00:27:17.668 "flush": true, 00:27:17.668 "reset": true, 00:27:17.668 "nvme_admin": false, 00:27:17.668 "nvme_io": false, 00:27:17.668 "nvme_io_md": false, 00:27:17.668 "write_zeroes": true, 00:27:17.668 "zcopy": true, 00:27:17.668 "get_zone_info": false, 00:27:17.668 "zone_management": false, 00:27:17.668 "zone_append": false, 00:27:17.668 "compare": false, 00:27:17.668 "compare_and_write": false, 00:27:17.668 "abort": true, 00:27:17.668 "seek_hole": false, 00:27:17.668 "seek_data": false, 00:27:17.668 "copy": true, 00:27:17.668 "nvme_iov_md": false 00:27:17.668 }, 00:27:17.668 "memory_domains": [ 00:27:17.668 { 00:27:17.668 "dma_device_id": "system", 00:27:17.668 "dma_device_type": 1 00:27:17.668 }, 00:27:17.668 { 00:27:17.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:17.668 "dma_device_type": 2 00:27:17.668 } 00:27:17.668 ], 00:27:17.668 "driver_specific": {} 00:27:17.668 } 00:27:17.668 ] 00:27:17.668 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:27:17.668 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:17.668 13:27:28 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:17.668 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:17.668 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:17.668 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:17.668 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:17.668 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:17.668 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:17.668 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:17.668 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:17.668 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:17.668 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:17.668 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.668 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:17.927 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:17.927 "name": "Existed_Raid", 00:27:17.927 "uuid": "8730520a-210b-4b38-a9b8-02a6206e63c1", 00:27:17.927 
"strip_size_kb": 0, 00:27:17.927 "state": "online", 00:27:17.927 "raid_level": "raid1", 00:27:17.927 "superblock": true, 00:27:17.927 "num_base_bdevs": 2, 00:27:17.927 "num_base_bdevs_discovered": 2, 00:27:17.927 "num_base_bdevs_operational": 2, 00:27:17.927 "base_bdevs_list": [ 00:27:17.927 { 00:27:17.927 "name": "BaseBdev1", 00:27:17.927 "uuid": "4e974a18-3d6a-498f-b094-774803524702", 00:27:17.927 "is_configured": true, 00:27:17.927 "data_offset": 256, 00:27:17.927 "data_size": 7936 00:27:17.927 }, 00:27:17.927 { 00:27:17.927 "name": "BaseBdev2", 00:27:17.927 "uuid": "06837bcd-709c-4fba-990a-46ac139b8b84", 00:27:17.927 "is_configured": true, 00:27:17.927 "data_offset": 256, 00:27:17.927 "data_size": 7936 00:27:17.927 } 00:27:17.927 ] 00:27:17.927 }' 00:27:17.927 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:17.927 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:18.495 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:27:18.495 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:18.496 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:18.496 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:18.496 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:18.496 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:27:18.496 13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:18.496 
13:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:18.759 [2024-07-25 13:27:29.100340] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:18.759 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:18.759 "name": "Existed_Raid", 00:27:18.759 "aliases": [ 00:27:18.759 "8730520a-210b-4b38-a9b8-02a6206e63c1" 00:27:18.759 ], 00:27:18.759 "product_name": "Raid Volume", 00:27:18.759 "block_size": 4096, 00:27:18.759 "num_blocks": 7936, 00:27:18.759 "uuid": "8730520a-210b-4b38-a9b8-02a6206e63c1", 00:27:18.759 "md_size": 32, 00:27:18.759 "md_interleave": false, 00:27:18.759 "dif_type": 0, 00:27:18.759 "assigned_rate_limits": { 00:27:18.759 "rw_ios_per_sec": 0, 00:27:18.759 "rw_mbytes_per_sec": 0, 00:27:18.759 "r_mbytes_per_sec": 0, 00:27:18.759 "w_mbytes_per_sec": 0 00:27:18.759 }, 00:27:18.759 "claimed": false, 00:27:18.759 "zoned": false, 00:27:18.759 "supported_io_types": { 00:27:18.759 "read": true, 00:27:18.759 "write": true, 00:27:18.759 "unmap": false, 00:27:18.759 "flush": false, 00:27:18.759 "reset": true, 00:27:18.759 "nvme_admin": false, 00:27:18.759 "nvme_io": false, 00:27:18.759 "nvme_io_md": false, 00:27:18.759 "write_zeroes": true, 00:27:18.759 "zcopy": false, 00:27:18.759 "get_zone_info": false, 00:27:18.759 "zone_management": false, 00:27:18.759 "zone_append": false, 00:27:18.759 "compare": false, 00:27:18.759 "compare_and_write": false, 00:27:18.759 "abort": false, 00:27:18.759 "seek_hole": false, 00:27:18.759 "seek_data": false, 00:27:18.759 "copy": false, 00:27:18.759 "nvme_iov_md": false 00:27:18.759 }, 00:27:18.759 "memory_domains": [ 00:27:18.759 { 00:27:18.759 "dma_device_id": "system", 00:27:18.759 "dma_device_type": 1 00:27:18.759 }, 00:27:18.759 { 00:27:18.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:18.759 "dma_device_type": 2 00:27:18.759 }, 00:27:18.759 { 00:27:18.759 "dma_device_id": 
"system", 00:27:18.759 "dma_device_type": 1 00:27:18.759 }, 00:27:18.759 { 00:27:18.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:18.759 "dma_device_type": 2 00:27:18.759 } 00:27:18.759 ], 00:27:18.759 "driver_specific": { 00:27:18.759 "raid": { 00:27:18.759 "uuid": "8730520a-210b-4b38-a9b8-02a6206e63c1", 00:27:18.759 "strip_size_kb": 0, 00:27:18.759 "state": "online", 00:27:18.759 "raid_level": "raid1", 00:27:18.759 "superblock": true, 00:27:18.759 "num_base_bdevs": 2, 00:27:18.759 "num_base_bdevs_discovered": 2, 00:27:18.759 "num_base_bdevs_operational": 2, 00:27:18.759 "base_bdevs_list": [ 00:27:18.759 { 00:27:18.759 "name": "BaseBdev1", 00:27:18.759 "uuid": "4e974a18-3d6a-498f-b094-774803524702", 00:27:18.759 "is_configured": true, 00:27:18.759 "data_offset": 256, 00:27:18.759 "data_size": 7936 00:27:18.759 }, 00:27:18.759 { 00:27:18.759 "name": "BaseBdev2", 00:27:18.759 "uuid": "06837bcd-709c-4fba-990a-46ac139b8b84", 00:27:18.759 "is_configured": true, 00:27:18.759 "data_offset": 256, 00:27:18.759 "data_size": 7936 00:27:18.759 } 00:27:18.759 ] 00:27:18.759 } 00:27:18.759 } 00:27:18.759 }' 00:27:18.759 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:18.759 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:27:18.759 BaseBdev2' 00:27:18.759 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:18.759 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:18.759 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:19.017 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:19.017 "name": "BaseBdev1", 00:27:19.017 "aliases": [ 00:27:19.017 "4e974a18-3d6a-498f-b094-774803524702" 00:27:19.017 ], 00:27:19.017 "product_name": "Malloc disk", 00:27:19.017 "block_size": 4096, 00:27:19.017 "num_blocks": 8192, 00:27:19.017 "uuid": "4e974a18-3d6a-498f-b094-774803524702", 00:27:19.017 "md_size": 32, 00:27:19.017 "md_interleave": false, 00:27:19.017 "dif_type": 0, 00:27:19.017 "assigned_rate_limits": { 00:27:19.017 "rw_ios_per_sec": 0, 00:27:19.017 "rw_mbytes_per_sec": 0, 00:27:19.017 "r_mbytes_per_sec": 0, 00:27:19.017 "w_mbytes_per_sec": 0 00:27:19.017 }, 00:27:19.017 "claimed": true, 00:27:19.017 "claim_type": "exclusive_write", 00:27:19.017 "zoned": false, 00:27:19.017 "supported_io_types": { 00:27:19.017 "read": true, 00:27:19.017 "write": true, 00:27:19.017 "unmap": true, 00:27:19.017 "flush": true, 00:27:19.017 "reset": true, 00:27:19.017 "nvme_admin": false, 00:27:19.017 "nvme_io": false, 00:27:19.017 "nvme_io_md": false, 00:27:19.017 "write_zeroes": true, 00:27:19.017 "zcopy": true, 00:27:19.017 "get_zone_info": false, 00:27:19.017 "zone_management": false, 00:27:19.017 "zone_append": false, 00:27:19.017 "compare": false, 00:27:19.017 "compare_and_write": false, 00:27:19.017 "abort": true, 00:27:19.017 "seek_hole": false, 00:27:19.017 "seek_data": false, 00:27:19.017 "copy": true, 00:27:19.017 "nvme_iov_md": false 00:27:19.017 }, 00:27:19.017 "memory_domains": [ 00:27:19.017 { 00:27:19.017 "dma_device_id": "system", 00:27:19.017 "dma_device_type": 1 00:27:19.017 }, 00:27:19.017 { 00:27:19.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:19.017 "dma_device_type": 2 00:27:19.017 } 00:27:19.017 ], 00:27:19.017 "driver_specific": {} 00:27:19.017 }' 00:27:19.017 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:19.017 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:27:19.017 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:19.017 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:19.275 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:19.275 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:19.275 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:19.275 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:19.275 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:19.275 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:19.275 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:19.275 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:19.275 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:19.275 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:19.275 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:19.533 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:19.533 "name": "BaseBdev2", 00:27:19.533 "aliases": [ 00:27:19.533 "06837bcd-709c-4fba-990a-46ac139b8b84" 00:27:19.533 ], 00:27:19.533 "product_name": "Malloc disk", 00:27:19.533 "block_size": 4096, 00:27:19.533 "num_blocks": 8192, 
00:27:19.533 "uuid": "06837bcd-709c-4fba-990a-46ac139b8b84", 00:27:19.533 "md_size": 32, 00:27:19.533 "md_interleave": false, 00:27:19.533 "dif_type": 0, 00:27:19.533 "assigned_rate_limits": { 00:27:19.533 "rw_ios_per_sec": 0, 00:27:19.533 "rw_mbytes_per_sec": 0, 00:27:19.533 "r_mbytes_per_sec": 0, 00:27:19.533 "w_mbytes_per_sec": 0 00:27:19.533 }, 00:27:19.533 "claimed": true, 00:27:19.533 "claim_type": "exclusive_write", 00:27:19.533 "zoned": false, 00:27:19.533 "supported_io_types": { 00:27:19.533 "read": true, 00:27:19.533 "write": true, 00:27:19.533 "unmap": true, 00:27:19.533 "flush": true, 00:27:19.533 "reset": true, 00:27:19.533 "nvme_admin": false, 00:27:19.533 "nvme_io": false, 00:27:19.533 "nvme_io_md": false, 00:27:19.533 "write_zeroes": true, 00:27:19.533 "zcopy": true, 00:27:19.533 "get_zone_info": false, 00:27:19.533 "zone_management": false, 00:27:19.533 "zone_append": false, 00:27:19.533 "compare": false, 00:27:19.533 "compare_and_write": false, 00:27:19.533 "abort": true, 00:27:19.533 "seek_hole": false, 00:27:19.533 "seek_data": false, 00:27:19.533 "copy": true, 00:27:19.533 "nvme_iov_md": false 00:27:19.533 }, 00:27:19.533 "memory_domains": [ 00:27:19.533 { 00:27:19.533 "dma_device_id": "system", 00:27:19.533 "dma_device_type": 1 00:27:19.533 }, 00:27:19.533 { 00:27:19.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:19.533 "dma_device_type": 2 00:27:19.533 } 00:27:19.533 ], 00:27:19.533 "driver_specific": {} 00:27:19.533 }' 00:27:19.533 13:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:19.791 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:19.791 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:19.791 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:19.791 13:27:30 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:19.791 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:19.791 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:19.791 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:19.791 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:19.791 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:19.791 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:20.049 [2024-07-25 13:27:30.491933] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 
00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.049 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:20.307 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:20.307 "name": "Existed_Raid", 00:27:20.307 "uuid": "8730520a-210b-4b38-a9b8-02a6206e63c1", 00:27:20.307 "strip_size_kb": 0, 00:27:20.307 "state": "online", 00:27:20.307 "raid_level": "raid1", 00:27:20.307 "superblock": true, 00:27:20.307 "num_base_bdevs": 2, 00:27:20.307 "num_base_bdevs_discovered": 1, 00:27:20.307 "num_base_bdevs_operational": 1, 00:27:20.307 
"base_bdevs_list": [ 00:27:20.307 { 00:27:20.307 "name": null, 00:27:20.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:20.307 "is_configured": false, 00:27:20.307 "data_offset": 256, 00:27:20.307 "data_size": 7936 00:27:20.307 }, 00:27:20.307 { 00:27:20.307 "name": "BaseBdev2", 00:27:20.307 "uuid": "06837bcd-709c-4fba-990a-46ac139b8b84", 00:27:20.307 "is_configured": true, 00:27:20.307 "data_offset": 256, 00:27:20.307 "data_size": 7936 00:27:20.307 } 00:27:20.307 ] 00:27:20.307 }' 00:27:20.307 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:20.307 13:27:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:20.872 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:27:20.872 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:20.872 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.872 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:21.130 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:21.130 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:21.130 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:21.388 [2024-07-25 13:27:31.717181] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:21.388 [2024-07-25 13:27:31.717253] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:27:21.388 [2024-07-25 13:27:31.728052] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:21.388 [2024-07-25 13:27:31.728083] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:21.388 [2024-07-25 13:27:31.728094] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17de790 name Existed_Raid, state offline 00:27:21.388 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:21.388 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:21.388 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:21.388 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:27:21.647 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:27:21.647 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:27:21.647 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:27:21.647 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 1004988 00:27:21.647 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 1004988 ']' 00:27:21.647 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 1004988 00:27:21.647 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:27:21.647 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:27:21.647 13:27:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1004988 00:27:21.647 13:27:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:21.647 13:27:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:21.647 13:27:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1004988' 00:27:21.647 killing process with pid 1004988 00:27:21.647 13:27:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 1004988 00:27:21.647 [2024-07-25 13:27:32.025002] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:21.647 13:27:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 1004988 00:27:21.647 [2024-07-25 13:27:32.025857] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:21.905 13:27:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:27:21.906 00:27:21.906 real 0m10.003s 00:27:21.906 user 0m17.741s 00:27:21.906 sys 0m1.907s 00:27:21.906 13:27:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:21.906 13:27:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:21.906 ************************************ 00:27:21.906 END TEST raid_state_function_test_sb_md_separate 00:27:21.906 ************************************ 00:27:21.906 13:27:32 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:27:21.906 13:27:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:21.906 13:27:32 bdev_raid -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:27:21.906 13:27:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:21.906 ************************************ 00:27:21.906 START TEST raid_superblock_test_md_separate 00:27:21.906 ************************************ 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@414 -- # local strip_size 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # 
'[' raid1 '!=' raid1 ']' 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@427 -- # raid_pid=1006904 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@428 -- # waitforlisten 1006904 /var/tmp/spdk-raid.sock 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 1006904 ']' 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:21.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:21.906 13:27:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:21.906 [2024-07-25 13:27:32.356333] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
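`waitforlisten` above blocks (with `max_retries=100`) until the freshly launched `bdev_svc` process is answering on `/var/tmp/spdk-raid.sock`. A rough self-contained sketch of that polling idea, with a backgrounded `touch` of a temp file standing in for the daemon creating its UNIX socket (an assumption for illustration — the real helper retries an RPC against the socket, not a plain file test):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll with a bounded retry count
# until the daemon's socket path appears, then proceed with the test.
sock=$(mktemp -u)                 # unused temp path, stand-in for the .sock
( sleep 1; touch "$sock" ) &      # stand-in for bdev_svc starting up

max_retries=100
ready=false
for (( i = 0; i < max_retries; i++ )); do
    if [[ -e $sock ]]; then
        ready=true
        break                     # daemon is up, stop polling
    fi
    sleep 0.1
done
wait                              # reap the background stand-in
echo "ready=$ready after $i retries"
rm -f "$sock"
```

If the retry budget is exhausted the real helper kills the process and fails the test; here that would just leave `ready=false`.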
00:27:21.906 [2024-07-25 13:27:32.356388] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006904 ] 00:27:22.164 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.164 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:22.164 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.164 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:22.164 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.164 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:22.164 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.164 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:22.164 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.164 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:22.164 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.164 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:22.164 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.164 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3d:02.3 cannot be used 
00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:22.165 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:22.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.165 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:22.165 [2024-07-25 13:27:32.489990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.165 [2024-07-25 13:27:32.576552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.165 [2024-07-25 13:27:32.637417] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:22.165 [2024-07-25 13:27:32.637458] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:23.096 13:27:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:23.096 13:27:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:27:23.096 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:27:23.096 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:27:23.096 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:27:23.096 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:27:23.096 13:27:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:23.096 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:23.096 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:27:23.096 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:23.096 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:27:23.096 malloc1 00:27:23.096 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:23.352 [2024-07-25 13:27:33.703500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:23.352 [2024-07-25 13:27:33.703542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:23.352 [2024-07-25 13:27:33.703560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x236fd00 00:27:23.352 [2024-07-25 13:27:33.703572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:23.352 [2024-07-25 13:27:33.704969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:23.352 [2024-07-25 13:27:33.704994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:23.352 pt1 00:27:23.352 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:27:23.352 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:27:23.352 
13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:27:23.352 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:27:23.352 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:23.352 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:23.352 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:27:23.352 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:23.352 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:27:23.610 malloc2 00:27:23.610 13:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:23.868 [2024-07-25 13:27:34.157826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:23.868 [2024-07-25 13:27:34.157863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:23.868 [2024-07-25 13:27:34.157879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2482bc0 00:27:23.868 [2024-07-25 13:27:34.157890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:23.868 [2024-07-25 13:27:34.159074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:23.868 [2024-07-25 13:27:34.159099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:23.868 
pt2 00:27:23.868 13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:27:23.868 13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:27:23.868 13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:27:24.195 [2024-07-25 13:27:34.370394] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:24.195 [2024-07-25 13:27:34.371507] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:24.195 [2024-07-25 13:27:34.371622] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2370400 00:27:24.195 [2024-07-25 13:27:34.371634] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:24.195 [2024-07-25 13:27:34.371700] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24832f0 00:27:24.195 [2024-07-25 13:27:34.371804] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2370400 00:27:24.195 [2024-07-25 13:27:34.371813] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2370400 00:27:24.195 [2024-07-25 13:27:34.371887] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:24.196 13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:24.196 13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:24.196 13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:24.196 13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:24.196 
13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:24.196 13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:24.196 13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:24.196 13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:24.196 13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:24.196 13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:24.196 13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.196 13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.196 13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:24.196 "name": "raid_bdev1", 00:27:24.196 "uuid": "24838a5a-701a-4839-9fdc-83297c37dcb4", 00:27:24.196 "strip_size_kb": 0, 00:27:24.196 "state": "online", 00:27:24.196 "raid_level": "raid1", 00:27:24.196 "superblock": true, 00:27:24.196 "num_base_bdevs": 2, 00:27:24.196 "num_base_bdevs_discovered": 2, 00:27:24.196 "num_base_bdevs_operational": 2, 00:27:24.196 "base_bdevs_list": [ 00:27:24.196 { 00:27:24.196 "name": "pt1", 00:27:24.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:24.196 "is_configured": true, 00:27:24.196 "data_offset": 256, 00:27:24.196 "data_size": 7936 00:27:24.196 }, 00:27:24.196 { 00:27:24.196 "name": "pt2", 00:27:24.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:24.196 "is_configured": true, 00:27:24.196 "data_offset": 256, 00:27:24.196 "data_size": 7936 00:27:24.196 } 00:27:24.196 ] 
00:27:24.196 }' 00:27:24.196 13:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:24.196 13:27:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:24.760 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:27:24.760 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:24.760 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:24.760 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:24.760 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:24.760 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:27:24.760 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:24.760 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:25.018 [2024-07-25 13:27:35.421425] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:25.018 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:25.018 "name": "raid_bdev1", 00:27:25.018 "aliases": [ 00:27:25.018 "24838a5a-701a-4839-9fdc-83297c37dcb4" 00:27:25.018 ], 00:27:25.018 "product_name": "Raid Volume", 00:27:25.018 "block_size": 4096, 00:27:25.018 "num_blocks": 7936, 00:27:25.018 "uuid": "24838a5a-701a-4839-9fdc-83297c37dcb4", 00:27:25.018 "md_size": 32, 00:27:25.018 "md_interleave": false, 00:27:25.018 "dif_type": 0, 00:27:25.018 "assigned_rate_limits": { 00:27:25.018 "rw_ios_per_sec": 0, 00:27:25.018 
"rw_mbytes_per_sec": 0, 00:27:25.018 "r_mbytes_per_sec": 0, 00:27:25.018 "w_mbytes_per_sec": 0 00:27:25.018 }, 00:27:25.018 "claimed": false, 00:27:25.018 "zoned": false, 00:27:25.018 "supported_io_types": { 00:27:25.018 "read": true, 00:27:25.018 "write": true, 00:27:25.018 "unmap": false, 00:27:25.018 "flush": false, 00:27:25.018 "reset": true, 00:27:25.018 "nvme_admin": false, 00:27:25.018 "nvme_io": false, 00:27:25.018 "nvme_io_md": false, 00:27:25.018 "write_zeroes": true, 00:27:25.018 "zcopy": false, 00:27:25.018 "get_zone_info": false, 00:27:25.018 "zone_management": false, 00:27:25.018 "zone_append": false, 00:27:25.018 "compare": false, 00:27:25.018 "compare_and_write": false, 00:27:25.018 "abort": false, 00:27:25.018 "seek_hole": false, 00:27:25.018 "seek_data": false, 00:27:25.018 "copy": false, 00:27:25.018 "nvme_iov_md": false 00:27:25.018 }, 00:27:25.018 "memory_domains": [ 00:27:25.018 { 00:27:25.018 "dma_device_id": "system", 00:27:25.018 "dma_device_type": 1 00:27:25.018 }, 00:27:25.018 { 00:27:25.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.018 "dma_device_type": 2 00:27:25.018 }, 00:27:25.018 { 00:27:25.018 "dma_device_id": "system", 00:27:25.018 "dma_device_type": 1 00:27:25.018 }, 00:27:25.018 { 00:27:25.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.018 "dma_device_type": 2 00:27:25.018 } 00:27:25.018 ], 00:27:25.018 "driver_specific": { 00:27:25.018 "raid": { 00:27:25.018 "uuid": "24838a5a-701a-4839-9fdc-83297c37dcb4", 00:27:25.018 "strip_size_kb": 0, 00:27:25.018 "state": "online", 00:27:25.018 "raid_level": "raid1", 00:27:25.018 "superblock": true, 00:27:25.018 "num_base_bdevs": 2, 00:27:25.018 "num_base_bdevs_discovered": 2, 00:27:25.018 "num_base_bdevs_operational": 2, 00:27:25.018 "base_bdevs_list": [ 00:27:25.018 { 00:27:25.018 "name": "pt1", 00:27:25.018 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:25.018 "is_configured": true, 00:27:25.018 "data_offset": 256, 00:27:25.018 "data_size": 7936 00:27:25.018 }, 
00:27:25.018 { 00:27:25.018 "name": "pt2", 00:27:25.018 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:25.018 "is_configured": true, 00:27:25.018 "data_offset": 256, 00:27:25.018 "data_size": 7936 00:27:25.018 } 00:27:25.018 ] 00:27:25.018 } 00:27:25.018 } 00:27:25.018 }' 00:27:25.018 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:25.018 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:25.018 pt2' 00:27:25.018 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:25.018 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:25.018 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:25.277 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:25.277 "name": "pt1", 00:27:25.277 "aliases": [ 00:27:25.277 "00000000-0000-0000-0000-000000000001" 00:27:25.277 ], 00:27:25.277 "product_name": "passthru", 00:27:25.277 "block_size": 4096, 00:27:25.277 "num_blocks": 8192, 00:27:25.277 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:25.277 "md_size": 32, 00:27:25.277 "md_interleave": false, 00:27:25.277 "dif_type": 0, 00:27:25.277 "assigned_rate_limits": { 00:27:25.277 "rw_ios_per_sec": 0, 00:27:25.277 "rw_mbytes_per_sec": 0, 00:27:25.277 "r_mbytes_per_sec": 0, 00:27:25.277 "w_mbytes_per_sec": 0 00:27:25.277 }, 00:27:25.277 "claimed": true, 00:27:25.277 "claim_type": "exclusive_write", 00:27:25.277 "zoned": false, 00:27:25.277 "supported_io_types": { 00:27:25.277 "read": true, 00:27:25.277 "write": true, 00:27:25.277 "unmap": true, 00:27:25.277 "flush": true, 00:27:25.277 "reset": 
true, 00:27:25.277 "nvme_admin": false, 00:27:25.277 "nvme_io": false, 00:27:25.277 "nvme_io_md": false, 00:27:25.277 "write_zeroes": true, 00:27:25.277 "zcopy": true, 00:27:25.277 "get_zone_info": false, 00:27:25.277 "zone_management": false, 00:27:25.277 "zone_append": false, 00:27:25.277 "compare": false, 00:27:25.277 "compare_and_write": false, 00:27:25.277 "abort": true, 00:27:25.277 "seek_hole": false, 00:27:25.277 "seek_data": false, 00:27:25.277 "copy": true, 00:27:25.277 "nvme_iov_md": false 00:27:25.277 }, 00:27:25.277 "memory_domains": [ 00:27:25.277 { 00:27:25.277 "dma_device_id": "system", 00:27:25.277 "dma_device_type": 1 00:27:25.277 }, 00:27:25.277 { 00:27:25.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.277 "dma_device_type": 2 00:27:25.277 } 00:27:25.277 ], 00:27:25.277 "driver_specific": { 00:27:25.277 "passthru": { 00:27:25.277 "name": "pt1", 00:27:25.277 "base_bdev_name": "malloc1" 00:27:25.277 } 00:27:25.277 } 00:27:25.277 }' 00:27:25.277 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:25.277 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:25.535 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:25.535 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:25.535 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:25.535 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:25.535 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:25.535 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:25.535 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:25.535 13:27:35 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:25.535 13:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:25.794 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:25.794 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:25.794 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:25.794 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:25.794 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:25.794 "name": "pt2", 00:27:25.794 "aliases": [ 00:27:25.794 "00000000-0000-0000-0000-000000000002" 00:27:25.794 ], 00:27:25.794 "product_name": "passthru", 00:27:25.794 "block_size": 4096, 00:27:25.794 "num_blocks": 8192, 00:27:25.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:25.794 "md_size": 32, 00:27:25.794 "md_interleave": false, 00:27:25.794 "dif_type": 0, 00:27:25.794 "assigned_rate_limits": { 00:27:25.794 "rw_ios_per_sec": 0, 00:27:25.794 "rw_mbytes_per_sec": 0, 00:27:25.794 "r_mbytes_per_sec": 0, 00:27:25.794 "w_mbytes_per_sec": 0 00:27:25.794 }, 00:27:25.794 "claimed": true, 00:27:25.794 "claim_type": "exclusive_write", 00:27:25.794 "zoned": false, 00:27:25.794 "supported_io_types": { 00:27:25.794 "read": true, 00:27:25.794 "write": true, 00:27:25.794 "unmap": true, 00:27:25.794 "flush": true, 00:27:25.794 "reset": true, 00:27:25.794 "nvme_admin": false, 00:27:25.794 "nvme_io": false, 00:27:25.794 "nvme_io_md": false, 00:27:25.794 "write_zeroes": true, 00:27:25.794 "zcopy": true, 00:27:25.794 "get_zone_info": false, 00:27:25.794 "zone_management": false, 00:27:25.794 "zone_append": false, 00:27:25.794 
"compare": false, 00:27:25.794 "compare_and_write": false, 00:27:25.794 "abort": true, 00:27:25.794 "seek_hole": false, 00:27:25.794 "seek_data": false, 00:27:25.794 "copy": true, 00:27:25.794 "nvme_iov_md": false 00:27:25.794 }, 00:27:25.794 "memory_domains": [ 00:27:25.794 { 00:27:25.794 "dma_device_id": "system", 00:27:25.794 "dma_device_type": 1 00:27:25.794 }, 00:27:25.794 { 00:27:25.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.794 "dma_device_type": 2 00:27:25.794 } 00:27:25.794 ], 00:27:25.794 "driver_specific": { 00:27:25.794 "passthru": { 00:27:25.794 "name": "pt2", 00:27:25.794 "base_bdev_name": "malloc2" 00:27:25.794 } 00:27:25.794 } 00:27:25.794 }' 00:27:25.794 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:26.052 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:26.052 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:26.052 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:26.052 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:26.052 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:26.052 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:26.052 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:26.052 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:26.052 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:26.052 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:26.309 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 
]] 00:27:26.309 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:26.309 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:27:26.309 [2024-07-25 13:27:36.772960] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:26.309 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=24838a5a-701a-4839-9fdc-83297c37dcb4 00:27:26.309 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' -z 24838a5a-701a-4839-9fdc-83297c37dcb4 ']' 00:27:26.309 13:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:26.566 [2024-07-25 13:27:37.001324] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:26.566 [2024-07-25 13:27:37.001342] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:26.566 [2024-07-25 13:27:37.001388] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:26.566 [2024-07-25 13:27:37.001433] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:26.566 [2024-07-25 13:27:37.001444] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2370400 name raid_bdev1, state offline 00:27:26.566 13:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.566 13:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:27:26.823 13:27:37 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:27:26.823 13:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:27:26.823 13:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:27:26.823 13:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:27.080 13:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:27:27.081 13:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:27.339 13:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:27.339 13:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:27.597 13:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:27:27.597 13:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:27.597 13:27:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:27:27.597 13:27:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:27.597 13:27:37 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:27.597 13:27:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.597 13:27:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:27.597 13:27:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.597 13:27:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:27.597 13:27:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:27.597 13:27:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:27.597 13:27:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:27.597 13:27:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:27.597 [2024-07-25 13:27:38.064070] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:27.597 [2024-07-25 13:27:38.065336] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:27.597 [2024-07-25 13:27:38.065387] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:27.598 [2024-07-25 13:27:38.065424] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:27.598 [2024-07-25 13:27:38.065441] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:27.598 [2024-07-25 13:27:38.065450] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2486600 name raid_bdev1, state configuring 00:27:27.598 request: 00:27:27.598 { 00:27:27.598 "name": "raid_bdev1", 00:27:27.598 "raid_level": "raid1", 00:27:27.598 "base_bdevs": [ 00:27:27.598 "malloc1", 00:27:27.598 "malloc2" 00:27:27.598 ], 00:27:27.598 "superblock": false, 00:27:27.598 "method": "bdev_raid_create", 00:27:27.598 "req_id": 1 00:27:27.598 } 00:27:27.598 Got JSON-RPC error response 00:27:27.598 response: 00:27:27.598 { 00:27:27.598 "code": -17, 00:27:27.598 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:27.598 } 00:27:27.598 13:27:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:27:27.598 13:27:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:27.856 13:27:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:27.856 13:27:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:27.856 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:27:27.856 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.856 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:27:27.856 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:27:27.856 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@480 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:28.115 [2024-07-25 13:27:38.525244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:28.115 [2024-07-25 13:27:38.525279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:28.115 [2024-07-25 13:27:38.525293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2486600 00:27:28.115 [2024-07-25 13:27:38.525305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:28.115 [2024-07-25 13:27:38.526498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:28.115 [2024-07-25 13:27:38.526521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:28.115 [2024-07-25 13:27:38.526559] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:28.115 [2024-07-25 13:27:38.526580] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:28.115 pt1 00:27:28.115 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:28.115 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:28.115 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:28.115 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:28.115 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:28.115 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:28.115 13:27:38 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:28.115 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:28.115 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:28.115 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:28.115 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.115 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.374 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:28.374 "name": "raid_bdev1", 00:27:28.374 "uuid": "24838a5a-701a-4839-9fdc-83297c37dcb4", 00:27:28.374 "strip_size_kb": 0, 00:27:28.374 "state": "configuring", 00:27:28.374 "raid_level": "raid1", 00:27:28.374 "superblock": true, 00:27:28.374 "num_base_bdevs": 2, 00:27:28.374 "num_base_bdevs_discovered": 1, 00:27:28.374 "num_base_bdevs_operational": 2, 00:27:28.374 "base_bdevs_list": [ 00:27:28.374 { 00:27:28.374 "name": "pt1", 00:27:28.374 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:28.374 "is_configured": true, 00:27:28.374 "data_offset": 256, 00:27:28.374 "data_size": 7936 00:27:28.374 }, 00:27:28.374 { 00:27:28.374 "name": null, 00:27:28.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:28.374 "is_configured": false, 00:27:28.374 "data_offset": 256, 00:27:28.374 "data_size": 7936 00:27:28.374 } 00:27:28.374 ] 00:27:28.374 }' 00:27:28.374 13:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:28.374 13:27:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:28.941 13:27:39 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:27:28.941 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:27:28.941 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:27:28.941 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:29.199 [2024-07-25 13:27:39.527895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:29.199 [2024-07-25 13:27:39.527936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:29.199 [2024-07-25 13:27:39.527953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2485a10 00:27:29.199 [2024-07-25 13:27:39.527964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:29.199 [2024-07-25 13:27:39.528147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:29.199 [2024-07-25 13:27:39.528163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:29.199 [2024-07-25 13:27:39.528201] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:29.199 [2024-07-25 13:27:39.528218] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:29.199 [2024-07-25 13:27:39.528298] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2485510 00:27:29.199 [2024-07-25 13:27:39.528308] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:29.199 [2024-07-25 13:27:39.528361] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24873a0 00:27:29.199 [2024-07-25 13:27:39.528454] bdev_raid.c:1751:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x2485510 00:27:29.199 [2024-07-25 13:27:39.528463] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2485510 00:27:29.199 [2024-07-25 13:27:39.528525] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:29.199 pt2 00:27:29.199 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:27:29.199 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:27:29.199 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:29.199 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:29.199 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:29.199 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:29.199 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:29.199 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:29.199 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:29.199 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:29.199 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:29.199 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:29.199 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.199 13:27:39 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.458 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:29.458 "name": "raid_bdev1", 00:27:29.458 "uuid": "24838a5a-701a-4839-9fdc-83297c37dcb4", 00:27:29.458 "strip_size_kb": 0, 00:27:29.458 "state": "online", 00:27:29.458 "raid_level": "raid1", 00:27:29.458 "superblock": true, 00:27:29.458 "num_base_bdevs": 2, 00:27:29.458 "num_base_bdevs_discovered": 2, 00:27:29.458 "num_base_bdevs_operational": 2, 00:27:29.458 "base_bdevs_list": [ 00:27:29.458 { 00:27:29.458 "name": "pt1", 00:27:29.458 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:29.458 "is_configured": true, 00:27:29.458 "data_offset": 256, 00:27:29.458 "data_size": 7936 00:27:29.458 }, 00:27:29.458 { 00:27:29.458 "name": "pt2", 00:27:29.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:29.458 "is_configured": true, 00:27:29.458 "data_offset": 256, 00:27:29.458 "data_size": 7936 00:27:29.458 } 00:27:29.458 ] 00:27:29.458 }' 00:27:29.458 13:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:29.458 13:27:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:30.023 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:27:30.023 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:30.023 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:30.023 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:30.023 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:30.023 13:27:40 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@198 -- # local name 00:27:30.023 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:30.023 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:30.281 [2024-07-25 13:27:40.542811] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:30.281 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:30.281 "name": "raid_bdev1", 00:27:30.281 "aliases": [ 00:27:30.281 "24838a5a-701a-4839-9fdc-83297c37dcb4" 00:27:30.281 ], 00:27:30.281 "product_name": "Raid Volume", 00:27:30.281 "block_size": 4096, 00:27:30.281 "num_blocks": 7936, 00:27:30.281 "uuid": "24838a5a-701a-4839-9fdc-83297c37dcb4", 00:27:30.281 "md_size": 32, 00:27:30.281 "md_interleave": false, 00:27:30.281 "dif_type": 0, 00:27:30.281 "assigned_rate_limits": { 00:27:30.281 "rw_ios_per_sec": 0, 00:27:30.281 "rw_mbytes_per_sec": 0, 00:27:30.281 "r_mbytes_per_sec": 0, 00:27:30.281 "w_mbytes_per_sec": 0 00:27:30.281 }, 00:27:30.281 "claimed": false, 00:27:30.281 "zoned": false, 00:27:30.281 "supported_io_types": { 00:27:30.281 "read": true, 00:27:30.281 "write": true, 00:27:30.281 "unmap": false, 00:27:30.281 "flush": false, 00:27:30.281 "reset": true, 00:27:30.281 "nvme_admin": false, 00:27:30.281 "nvme_io": false, 00:27:30.281 "nvme_io_md": false, 00:27:30.281 "write_zeroes": true, 00:27:30.281 "zcopy": false, 00:27:30.281 "get_zone_info": false, 00:27:30.281 "zone_management": false, 00:27:30.281 "zone_append": false, 00:27:30.281 "compare": false, 00:27:30.281 "compare_and_write": false, 00:27:30.281 "abort": false, 00:27:30.281 "seek_hole": false, 00:27:30.281 "seek_data": false, 00:27:30.281 "copy": false, 00:27:30.281 "nvme_iov_md": false 00:27:30.281 }, 00:27:30.281 "memory_domains": [ 00:27:30.281 { 00:27:30.281 
"dma_device_id": "system", 00:27:30.281 "dma_device_type": 1 00:27:30.281 }, 00:27:30.281 { 00:27:30.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.281 "dma_device_type": 2 00:27:30.281 }, 00:27:30.281 { 00:27:30.281 "dma_device_id": "system", 00:27:30.281 "dma_device_type": 1 00:27:30.281 }, 00:27:30.281 { 00:27:30.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.281 "dma_device_type": 2 00:27:30.282 } 00:27:30.282 ], 00:27:30.282 "driver_specific": { 00:27:30.282 "raid": { 00:27:30.282 "uuid": "24838a5a-701a-4839-9fdc-83297c37dcb4", 00:27:30.282 "strip_size_kb": 0, 00:27:30.282 "state": "online", 00:27:30.282 "raid_level": "raid1", 00:27:30.282 "superblock": true, 00:27:30.282 "num_base_bdevs": 2, 00:27:30.282 "num_base_bdevs_discovered": 2, 00:27:30.282 "num_base_bdevs_operational": 2, 00:27:30.282 "base_bdevs_list": [ 00:27:30.282 { 00:27:30.282 "name": "pt1", 00:27:30.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:30.282 "is_configured": true, 00:27:30.282 "data_offset": 256, 00:27:30.282 "data_size": 7936 00:27:30.282 }, 00:27:30.282 { 00:27:30.282 "name": "pt2", 00:27:30.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:30.282 "is_configured": true, 00:27:30.282 "data_offset": 256, 00:27:30.282 "data_size": 7936 00:27:30.282 } 00:27:30.282 ] 00:27:30.282 } 00:27:30.282 } 00:27:30.282 }' 00:27:30.282 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:30.282 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:30.282 pt2' 00:27:30.282 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:30.282 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 
00:27:30.282 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:30.542 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:30.542 "name": "pt1", 00:27:30.542 "aliases": [ 00:27:30.542 "00000000-0000-0000-0000-000000000001" 00:27:30.542 ], 00:27:30.542 "product_name": "passthru", 00:27:30.542 "block_size": 4096, 00:27:30.542 "num_blocks": 8192, 00:27:30.542 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:30.542 "md_size": 32, 00:27:30.542 "md_interleave": false, 00:27:30.542 "dif_type": 0, 00:27:30.542 "assigned_rate_limits": { 00:27:30.542 "rw_ios_per_sec": 0, 00:27:30.542 "rw_mbytes_per_sec": 0, 00:27:30.542 "r_mbytes_per_sec": 0, 00:27:30.542 "w_mbytes_per_sec": 0 00:27:30.542 }, 00:27:30.542 "claimed": true, 00:27:30.542 "claim_type": "exclusive_write", 00:27:30.542 "zoned": false, 00:27:30.542 "supported_io_types": { 00:27:30.542 "read": true, 00:27:30.542 "write": true, 00:27:30.542 "unmap": true, 00:27:30.542 "flush": true, 00:27:30.542 "reset": true, 00:27:30.542 "nvme_admin": false, 00:27:30.542 "nvme_io": false, 00:27:30.542 "nvme_io_md": false, 00:27:30.542 "write_zeroes": true, 00:27:30.542 "zcopy": true, 00:27:30.542 "get_zone_info": false, 00:27:30.542 "zone_management": false, 00:27:30.542 "zone_append": false, 00:27:30.542 "compare": false, 00:27:30.542 "compare_and_write": false, 00:27:30.542 "abort": true, 00:27:30.542 "seek_hole": false, 00:27:30.542 "seek_data": false, 00:27:30.542 "copy": true, 00:27:30.542 "nvme_iov_md": false 00:27:30.542 }, 00:27:30.542 "memory_domains": [ 00:27:30.542 { 00:27:30.542 "dma_device_id": "system", 00:27:30.542 "dma_device_type": 1 00:27:30.542 }, 00:27:30.542 { 00:27:30.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.542 "dma_device_type": 2 00:27:30.542 } 00:27:30.542 ], 00:27:30.542 "driver_specific": { 00:27:30.542 "passthru": { 00:27:30.542 "name": "pt1", 00:27:30.542 "base_bdev_name": "malloc1" 
00:27:30.542 } 00:27:30.542 } 00:27:30.542 }' 00:27:30.542 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:30.542 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:30.542 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:30.542 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:30.542 13:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:30.542 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:30.542 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:30.812 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:30.812 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:30.812 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:30.812 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:30.812 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:30.812 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:30.812 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:30.812 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:31.071 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:31.071 "name": "pt2", 00:27:31.071 "aliases": [ 00:27:31.071 
"00000000-0000-0000-0000-000000000002" 00:27:31.071 ], 00:27:31.071 "product_name": "passthru", 00:27:31.071 "block_size": 4096, 00:27:31.071 "num_blocks": 8192, 00:27:31.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:31.071 "md_size": 32, 00:27:31.071 "md_interleave": false, 00:27:31.071 "dif_type": 0, 00:27:31.071 "assigned_rate_limits": { 00:27:31.071 "rw_ios_per_sec": 0, 00:27:31.071 "rw_mbytes_per_sec": 0, 00:27:31.071 "r_mbytes_per_sec": 0, 00:27:31.071 "w_mbytes_per_sec": 0 00:27:31.071 }, 00:27:31.071 "claimed": true, 00:27:31.071 "claim_type": "exclusive_write", 00:27:31.071 "zoned": false, 00:27:31.071 "supported_io_types": { 00:27:31.071 "read": true, 00:27:31.071 "write": true, 00:27:31.071 "unmap": true, 00:27:31.071 "flush": true, 00:27:31.071 "reset": true, 00:27:31.071 "nvme_admin": false, 00:27:31.071 "nvme_io": false, 00:27:31.071 "nvme_io_md": false, 00:27:31.071 "write_zeroes": true, 00:27:31.071 "zcopy": true, 00:27:31.071 "get_zone_info": false, 00:27:31.071 "zone_management": false, 00:27:31.071 "zone_append": false, 00:27:31.071 "compare": false, 00:27:31.071 "compare_and_write": false, 00:27:31.071 "abort": true, 00:27:31.071 "seek_hole": false, 00:27:31.071 "seek_data": false, 00:27:31.071 "copy": true, 00:27:31.071 "nvme_iov_md": false 00:27:31.071 }, 00:27:31.071 "memory_domains": [ 00:27:31.071 { 00:27:31.071 "dma_device_id": "system", 00:27:31.071 "dma_device_type": 1 00:27:31.071 }, 00:27:31.071 { 00:27:31.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:31.071 "dma_device_type": 2 00:27:31.071 } 00:27:31.071 ], 00:27:31.071 "driver_specific": { 00:27:31.071 "passthru": { 00:27:31.071 "name": "pt2", 00:27:31.071 "base_bdev_name": "malloc2" 00:27:31.071 } 00:27:31.071 } 00:27:31.071 }' 00:27:31.071 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:31.071 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:31.071 
13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:31.071 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:31.071 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:31.330 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:31.330 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:31.330 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:31.330 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:31.330 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:31.330 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:31.330 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:31.331 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:31.331 13:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:27:31.590 [2024-07-25 13:27:41.982593] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:31.590 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # '[' 24838a5a-701a-4839-9fdc-83297c37dcb4 '!=' 24838a5a-701a-4839-9fdc-83297c37dcb4 ']' 00:27:31.590 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:27:31.590 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:31.590 13:27:42 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:27:31.590 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:31.849 [2024-07-25 13:27:42.210980] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:31.849 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:31.849 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:31.849 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:31.849 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:31.849 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:31.849 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:31.849 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:31.849 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:31.849 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:31.849 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:31.849 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.849 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.109 13:27:42 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:32.109 "name": "raid_bdev1", 00:27:32.109 "uuid": "24838a5a-701a-4839-9fdc-83297c37dcb4", 00:27:32.109 "strip_size_kb": 0, 00:27:32.109 "state": "online", 00:27:32.109 "raid_level": "raid1", 00:27:32.109 "superblock": true, 00:27:32.109 "num_base_bdevs": 2, 00:27:32.109 "num_base_bdevs_discovered": 1, 00:27:32.109 "num_base_bdevs_operational": 1, 00:27:32.109 "base_bdevs_list": [ 00:27:32.109 { 00:27:32.109 "name": null, 00:27:32.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.109 "is_configured": false, 00:27:32.109 "data_offset": 256, 00:27:32.109 "data_size": 7936 00:27:32.109 }, 00:27:32.109 { 00:27:32.109 "name": "pt2", 00:27:32.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:32.109 "is_configured": true, 00:27:32.109 "data_offset": 256, 00:27:32.109 "data_size": 7936 00:27:32.109 } 00:27:32.109 ] 00:27:32.109 }' 00:27:32.109 13:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:32.109 13:27:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:32.677 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:32.936 [2024-07-25 13:27:43.233715] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:32.936 [2024-07-25 13:27:43.233736] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:32.936 [2024-07-25 13:27:43.233781] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:32.936 [2024-07-25 13:27:43.233821] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:32.936 [2024-07-25 13:27:43.233832] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x2485510 name raid_bdev1, state offline 00:27:32.936 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.936 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:27:33.195 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:27:33.195 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:27:33.195 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:27:33.195 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:27:33.195 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@534 -- # i=1 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:33.454 [2024-07-25 13:27:43.915478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:33.454 [2024-07-25 
13:27:43.915520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.454 [2024-07-25 13:27:43.915536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22ee810 00:27:33.454 [2024-07-25 13:27:43.915547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:33.454 [2024-07-25 13:27:43.916882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.454 [2024-07-25 13:27:43.916909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:33.454 [2024-07-25 13:27:43.916950] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:33.454 [2024-07-25 13:27:43.916973] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:33.454 [2024-07-25 13:27:43.917039] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2485690 00:27:33.454 [2024-07-25 13:27:43.917048] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:33.454 [2024-07-25 13:27:43.917101] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24854f0 00:27:33.454 [2024-07-25 13:27:43.917195] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2485690 00:27:33.454 [2024-07-25 13:27:43.917204] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2485690 00:27:33.454 [2024-07-25 13:27:43.917265] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:33.454 pt2 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.454 13:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.713 13:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:33.713 "name": "raid_bdev1", 00:27:33.713 "uuid": "24838a5a-701a-4839-9fdc-83297c37dcb4", 00:27:33.713 "strip_size_kb": 0, 00:27:33.713 "state": "online", 00:27:33.713 "raid_level": "raid1", 00:27:33.713 "superblock": true, 00:27:33.713 "num_base_bdevs": 2, 00:27:33.713 "num_base_bdevs_discovered": 1, 00:27:33.713 "num_base_bdevs_operational": 1, 00:27:33.713 "base_bdevs_list": [ 00:27:33.713 { 00:27:33.713 "name": null, 00:27:33.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.713 "is_configured": false, 00:27:33.713 "data_offset": 256, 00:27:33.713 "data_size": 7936 00:27:33.713 }, 00:27:33.713 { 00:27:33.713 "name": "pt2", 00:27:33.713 "uuid": "00000000-0000-0000-0000-000000000002", 
00:27:33.713 "is_configured": true, 00:27:33.713 "data_offset": 256, 00:27:33.713 "data_size": 7936 00:27:33.713 } 00:27:33.713 ] 00:27:33.713 }' 00:27:33.713 13:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:33.713 13:27:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:34.280 13:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:34.539 [2024-07-25 13:27:44.930155] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:34.539 [2024-07-25 13:27:44.930178] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:34.539 [2024-07-25 13:27:44.930224] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:34.539 [2024-07-25 13:27:44.930263] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:34.539 [2024-07-25 13:27:44.930273] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2485690 name raid_bdev1, state offline 00:27:34.539 13:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.539 13:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:27:34.798 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:27:34.798 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:27:34.798 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:27:34.798 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:35.057 [2024-07-25 13:27:45.383322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:35.057 [2024-07-25 13:27:45.383357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.057 [2024-07-25 13:27:45.383373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2486e10 00:27:35.057 [2024-07-25 13:27:45.383384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.057 [2024-07-25 13:27:45.384724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.057 [2024-07-25 13:27:45.384749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:35.057 [2024-07-25 13:27:45.384789] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:35.057 [2024-07-25 13:27:45.384811] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:35.057 [2024-07-25 13:27:45.384893] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:35.057 [2024-07-25 13:27:45.384905] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:35.057 [2024-07-25 13:27:45.384918] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22eddd0 name raid_bdev1, state configuring 00:27:35.057 [2024-07-25 13:27:45.384939] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:35.057 [2024-07-25 13:27:45.384984] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2482df0 00:27:35.057 [2024-07-25 13:27:45.384993] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:35.057 [2024-07-25 13:27:45.385050] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x248a4f0 00:27:35.057 [2024-07-25 13:27:45.385147] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2482df0 00:27:35.057 [2024-07-25 13:27:45.385157] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2482df0 00:27:35.057 [2024-07-25 13:27:45.385228] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:35.057 pt1 00:27:35.057 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:27:35.057 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:35.057 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:35.057 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:35.057 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:35.057 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:35.057 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:35.057 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:35.057 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:35.057 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:35.057 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:35.057 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:27:35.057 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.316 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:35.316 "name": "raid_bdev1", 00:27:35.316 "uuid": "24838a5a-701a-4839-9fdc-83297c37dcb4", 00:27:35.316 "strip_size_kb": 0, 00:27:35.316 "state": "online", 00:27:35.316 "raid_level": "raid1", 00:27:35.316 "superblock": true, 00:27:35.316 "num_base_bdevs": 2, 00:27:35.316 "num_base_bdevs_discovered": 1, 00:27:35.316 "num_base_bdevs_operational": 1, 00:27:35.316 "base_bdevs_list": [ 00:27:35.316 { 00:27:35.316 "name": null, 00:27:35.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:35.316 "is_configured": false, 00:27:35.316 "data_offset": 256, 00:27:35.316 "data_size": 7936 00:27:35.316 }, 00:27:35.316 { 00:27:35.316 "name": "pt2", 00:27:35.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:35.316 "is_configured": true, 00:27:35.316 "data_offset": 256, 00:27:35.316 "data_size": 7936 00:27:35.316 } 00:27:35.316 ] 00:27:35.316 }' 00:27:35.316 13:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:35.316 13:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:35.948 13:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:27:35.948 13:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:35.948 13:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:27:35.948 13:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:35.948 13:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:27:36.206 [2024-07-25 13:27:46.638867] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:36.206 13:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # '[' 24838a5a-701a-4839-9fdc-83297c37dcb4 '!=' 24838a5a-701a-4839-9fdc-83297c37dcb4 ']' 00:27:36.206 13:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@578 -- # killprocess 1006904 00:27:36.206 13:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 1006904 ']' 00:27:36.206 13:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 1006904 00:27:36.206 13:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:27:36.206 13:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:36.206 13:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1006904 00:27:36.465 13:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:36.465 13:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:36.465 13:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1006904' 00:27:36.465 killing process with pid 1006904 00:27:36.465 13:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 1006904 00:27:36.465 [2024-07-25 13:27:46.718057] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:36.465 [2024-07-25 13:27:46.718102] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:36.465 [2024-07-25 
13:27:46.718147] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:36.465 [2024-07-25 13:27:46.718158] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2482df0 name raid_bdev1, state offline 00:27:36.465 13:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 1006904 00:27:36.466 [2024-07-25 13:27:46.736646] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:36.466 13:27:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@580 -- # return 0 00:27:36.466 00:27:36.466 real 0m14.629s 00:27:36.466 user 0m26.468s 00:27:36.466 sys 0m2.738s 00:27:36.466 13:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:36.466 13:27:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:36.466 ************************************ 00:27:36.466 END TEST raid_superblock_test_md_separate 00:27:36.466 ************************************ 00:27:36.726 13:27:46 bdev_raid -- bdev/bdev_raid.sh@987 -- # '[' true = true ']' 00:27:36.726 13:27:46 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:27:36.726 13:27:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:27:36.726 13:27:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:36.726 13:27:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:36.726 ************************************ 00:27:36.726 START TEST raid_rebuild_test_sb_md_separate 00:27:36.726 ************************************ 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:27:36.726 13:27:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # local verify=true 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # local strip_size 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # local create_arg 00:27:36.726 13:27:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@594 -- # local data_offset 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # raid_pid=1009603 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # waitforlisten 1009603 /var/tmp/spdk-raid.sock 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 1009603 ']' 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:36.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:36.726 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:36.726 [2024-07-25 13:27:47.068276] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:27:36.726 [2024-07-25 13:27:47.068332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009603 ] 00:27:36.726 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:36.726 Zero copy mechanism will not be used. 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: 
Requested device 0000:3d:02.0 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 
0000:3f:01.6 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:36.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.726 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:36.726 [2024-07-25 13:27:47.198574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.986 [2024-07-25 13:27:47.285133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.986 [2024-07-25 13:27:47.346148] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:36.986 [2024-07-25 13:27:47.346185] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:37.554 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:37.554 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:27:37.554 13:27:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:27:37.554 13:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc
00:27:37.813 BaseBdev1_malloc
00:27:37.813 13:27:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:27:38.072 [2024-07-25 13:27:48.419367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:27:38.072 [2024-07-25 13:27:48.419409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:27:38.072 [2024-07-25 13:27:48.419429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f8f000
00:27:38.072 [2024-07-25 13:27:48.419445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:27:38.072 [2024-07-25 13:27:48.420845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:27:38.072 [2024-07-25 13:27:48.420872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:27:38.072 BaseBdev1
00:27:38.072 13:27:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:27:38.072 13:27:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc
00:27:38.331 BaseBdev2_malloc
00:27:38.331 13:27:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:27:38.590 [2024-07-25 13:27:48.873705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:27:38.590 [2024-07-25 13:27:48.873744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:27:38.590 [2024-07-25 13:27:48.873762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20a2270
00:27:38.590 [2024-07-25 13:27:48.873773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:27:38.590 [2024-07-25 13:27:48.874994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:27:38.590 [2024-07-25 13:27:48.875019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:27:38.590 BaseBdev2
00:27:38.590 13:27:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc
00:27:38.849 spare_malloc
00:27:38.849 13:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:27:38.849 spare_delay
00:27:39.108 13:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:27:39.108 [2024-07-25 13:27:49.560364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:27:39.108 [2024-07-25 13:27:49.560406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:27:39.108 [2024-07-25 13:27:49.560427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20a61a0
00:27:39.108 [2024-07-25 13:27:49.560438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:27:39.108 [2024-07-25 13:27:49.561698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:27:39.108 [2024-07-25 13:27:49.561724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:27:39.108 spare
00:27:39.108 13:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
00:27:39.367 [2024-07-25 13:27:49.784988] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:27:39.367 [2024-07-25 13:27:49.786149] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:27:39.367 [2024-07-25 13:27:49.786288] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x20a54d0
00:27:39.367 [2024-07-25 13:27:49.786300] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:27:39.367 [2024-07-25 13:27:49.786369] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20a9250
00:27:39.367 [2024-07-25 13:27:49.786466] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20a54d0
00:27:39.367 [2024-07-25 13:27:49.786475] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20a54d0
00:27:39.367 [2024-07-25 13:27:49.786548] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:27:39.367 13:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:27:39.367 13:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:27:39.367 13:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:27:39.367 13:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:27:39.367 13:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:27:39.368 13:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:27:39.368 13:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:27:39.368 13:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:27:39.368 13:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:27:39.368 13:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp
00:27:39.368 13:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:39.368 13:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:39.627 13:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:27:39.627 "name": "raid_bdev1",
00:27:39.627 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de",
00:27:39.627 "strip_size_kb": 0,
00:27:39.627 "state": "online",
00:27:39.627 "raid_level": "raid1",
00:27:39.627 "superblock": true,
00:27:39.627 "num_base_bdevs": 2,
00:27:39.627 "num_base_bdevs_discovered": 2,
00:27:39.627 "num_base_bdevs_operational": 2,
00:27:39.627 "base_bdevs_list": [
00:27:39.627 {
00:27:39.627 "name": "BaseBdev1",
00:27:39.627 "uuid": "2da2ae1b-2d32-55be-b7a9-38b541d52c2e",
00:27:39.627 "is_configured": true,
00:27:39.627 "data_offset": 256,
00:27:39.627 "data_size": 7936
00:27:39.627 },
00:27:39.627 {
00:27:39.627 "name": "BaseBdev2",
00:27:39.627 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9",
00:27:39.627 "is_configured": true,
00:27:39.627 "data_offset": 256,
00:27:39.627 "data_size": 7936
00:27:39.627 }
00:27:39.627 ]
00:27:39.627 }'
00:27:39.627 13:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:27:39.627 13:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:27:40.194 13:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:27:40.194 13:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks'
00:27:40.453 [2024-07-25 13:27:50.815899] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:27:40.453 13:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936
00:27:40.453 13:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:40.453 13:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:27:41.020 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # data_offset=256
00:27:41.020 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # '[' false = true ']'
00:27:41.020 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # '[' true = true ']'
00:27:41.020 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # local write_unit_size
00:27:41.020 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:27:41.020 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:27:41.020 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:27:41.020 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list
00:27:41.020 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:27:41.020 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list
00:27:41.020 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i
00:27:41.020 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:27:41.020 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:27:41.020 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:27:41.279 [2024-07-25 13:27:51.565693] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20a95c0
00:27:41.279 /dev/nbd0
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:27:41.279 1+0 records in
00:27:41.279 1+0 records out
00:27:41.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244446 s, 16.8 MB/s
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']'
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@648 -- # write_unit_size=1
00:27:41.279 13:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:27:42.216 7936+0 records in
00:27:42.216 7936+0 records out
00:27:42.216 32505856 bytes (33 MB, 31 MiB) copied, 0.725855 s, 44.8 MB/s
00:27:42.216 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:27:42.216 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:27:42.216 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:27:42.216 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list
00:27:42.216 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i
00:27:42.216 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:27:42.216 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:27:42.216 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:27:42.216 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:27:42.216 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:27:42.216 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:27:42.216 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:27:42.216 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:27:42.216 [2024-07-25 13:27:52.555324] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:27:42.216 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break
00:27:42.216 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0
00:27:42.216 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:27:42.474 [2024-07-25 13:27:52.767921] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:27:42.474 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:27:42.474 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:27:42.474 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:27:42.474 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:27:42.474 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:27:42.474 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:27:42.474 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:27:42.474 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:27:42.474 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:27:42.474 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp
00:27:42.474 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:42.474 13:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:42.732 13:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:27:42.732 "name": "raid_bdev1",
00:27:42.732 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de",
00:27:42.732 "strip_size_kb": 0,
00:27:42.732 "state": "online",
00:27:42.732 "raid_level": "raid1",
00:27:42.732 "superblock": true,
00:27:42.732 "num_base_bdevs": 2,
00:27:42.732 "num_base_bdevs_discovered": 1,
00:27:42.732 "num_base_bdevs_operational": 1,
00:27:42.732 "base_bdevs_list": [
00:27:42.732 {
00:27:42.732 "name": null,
00:27:42.732 "uuid": "00000000-0000-0000-0000-000000000000",
00:27:42.732 "is_configured": false,
00:27:42.732 "data_offset": 256,
00:27:42.732 "data_size": 7936
00:27:42.732 },
00:27:42.732 {
00:27:42.732 "name": "BaseBdev2",
00:27:42.732 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9",
00:27:42.732 "is_configured": true,
00:27:42.732 "data_offset": 256,
00:27:42.732 "data_size": 7936
00:27:42.732 }
00:27:42.732 ]
00:27:42.732 }'
00:27:42.732 13:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:27:42.732 13:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:27:43.299 13:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:27:43.299 [2024-07-25 13:27:53.778596] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:27:43.299 [2024-07-25 13:27:53.780765] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20a7e20
00:27:43.299 [2024-07-25 13:27:53.782802] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:27:43.557 13:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1
00:27:44.489 13:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:27:44.489 13:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:27:44.489 13:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:27:44.489 13:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare
00:27:44.489 13:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:27:44.489 13:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:44.489 13:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:44.748 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:27:44.748 "name": "raid_bdev1",
00:27:44.748 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de",
00:27:44.748 "strip_size_kb": 0,
00:27:44.748 "state": "online",
00:27:44.748 "raid_level": "raid1",
00:27:44.748 "superblock": true,
00:27:44.748 "num_base_bdevs": 2,
00:27:44.748 "num_base_bdevs_discovered": 2,
00:27:44.748 "num_base_bdevs_operational": 2,
00:27:44.748 "process": {
00:27:44.748 "type": "rebuild",
00:27:44.748 "target": "spare",
00:27:44.748 "progress": {
00:27:44.748 "blocks": 3072,
00:27:44.748 "percent": 38
00:27:44.748 }
00:27:44.748 },
00:27:44.748 "base_bdevs_list": [
00:27:44.748 {
00:27:44.748 "name": "spare",
00:27:44.748 "uuid": "a3e237aa-a64d-5ab3-a486-de5d2630fdde",
00:27:44.748 "is_configured": true,
00:27:44.748 "data_offset": 256,
00:27:44.748 "data_size": 7936
00:27:44.748 },
00:27:44.748 {
00:27:44.748 "name": "BaseBdev2",
00:27:44.748 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9",
00:27:44.748 "is_configured": true,
00:27:44.748 "data_offset": 256,
00:27:44.748 "data_size": 7936
00:27:44.748 }
00:27:44.748 ]
00:27:44.748 }'
00:27:44.748 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:27:44.748 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:27:44.748 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:27:44.748 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:27:44.748 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:27:45.006 [2024-07-25 13:27:55.319768] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:27:45.006 [2024-07-25 13:27:55.394555] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:27:45.006 [2024-07-25 13:27:55.394596] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:27:45.006 [2024-07-25 13:27:55.394610] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:27:45.006 [2024-07-25 13:27:55.394618] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:27:45.007 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:27:45.007 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:27:45.007 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:27:45.007 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:27:45.007 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:27:45.007 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:27:45.007 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:27:45.007 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:27:45.007 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:27:45.007 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp
00:27:45.007 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:45.007 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:45.295 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:27:45.295 "name": "raid_bdev1",
00:27:45.295 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de",
00:27:45.295 "strip_size_kb": 0,
00:27:45.295 "state": "online",
00:27:45.295 "raid_level": "raid1",
00:27:45.295 "superblock": true,
00:27:45.295 "num_base_bdevs": 2,
00:27:45.295 "num_base_bdevs_discovered": 1,
00:27:45.295 "num_base_bdevs_operational": 1,
00:27:45.295 "base_bdevs_list": [
00:27:45.295 {
00:27:45.295 "name": null,
00:27:45.295 "uuid": "00000000-0000-0000-0000-000000000000",
00:27:45.295 "is_configured": false,
00:27:45.295 "data_offset": 256,
00:27:45.295 "data_size": 7936
00:27:45.295 },
00:27:45.295 {
00:27:45.295 "name": "BaseBdev2",
00:27:45.295 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9",
00:27:45.295 "is_configured": true,
00:27:45.295 "data_offset": 256,
00:27:45.295 "data_size": 7936
00:27:45.295 }
00:27:45.295 ]
00:27:45.295 }'
00:27:45.295 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:27:45.295 13:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:27:45.867 13:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none
00:27:45.867 13:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:27:45.867 13:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:27:45.867 13:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none
00:27:45.868 13:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:27:45.868 13:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:45.868 13:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:46.125 13:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:27:46.125 "name": "raid_bdev1",
00:27:46.125 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de",
00:27:46.125 "strip_size_kb": 0,
00:27:46.125 "state": "online",
00:27:46.125 "raid_level": "raid1",
00:27:46.125 "superblock": true,
00:27:46.125 "num_base_bdevs": 2,
00:27:46.125 "num_base_bdevs_discovered": 1,
00:27:46.125 "num_base_bdevs_operational": 1,
00:27:46.125 "base_bdevs_list": [
00:27:46.125 {
00:27:46.125 "name": null,
00:27:46.126 "uuid": "00000000-0000-0000-0000-000000000000",
00:27:46.126 "is_configured": false,
00:27:46.126 "data_offset": 256,
00:27:46.126 "data_size": 7936
00:27:46.126 },
00:27:46.126 {
00:27:46.126 "name": "BaseBdev2",
00:27:46.126 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9",
00:27:46.126 "is_configured": true,
00:27:46.126 "data_offset": 256,
00:27:46.126 "data_size": 7936
00:27:46.126 }
00:27:46.126 ]
00:27:46.126 }'
00:27:46.126 13:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:27:46.126 13:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:27:46.126 13:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:27:46.126 13:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:27:46.126 13:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:27:46.382 [2024-07-25 13:27:56.753093] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:27:46.382 [2024-07-25 13:27:56.755271] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20a90d0
00:27:46.382 [2024-07-25 13:27:56.756617] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:27:46.382 13:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@678 -- # sleep 1
00:27:47.311 13:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:27:47.311 13:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:27:47.311 13:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:27:47.311 13:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare
00:27:47.311 13:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:27:47.311 13:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:47.311 13:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:47.569 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:27:47.569 "name": "raid_bdev1",
00:27:47.569 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de",
00:27:47.569 "strip_size_kb": 0,
00:27:47.569 "state": "online",
00:27:47.569 "raid_level": "raid1",
00:27:47.569 "superblock": true,
00:27:47.569 "num_base_bdevs": 2,
00:27:47.569 "num_base_bdevs_discovered": 2,
00:27:47.569 "num_base_bdevs_operational": 2,
00:27:47.569 "process": {
00:27:47.569 "type": "rebuild",
00:27:47.569 "target": "spare",
00:27:47.569 "progress": {
00:27:47.569 "blocks": 3072,
00:27:47.569 "percent": 38
00:27:47.569 }
00:27:47.569 },
00:27:47.569 "base_bdevs_list": [
00:27:47.569 {
00:27:47.569 "name": "spare",
00:27:47.569 "uuid": "a3e237aa-a64d-5ab3-a486-de5d2630fdde",
00:27:47.569 "is_configured": true,
00:27:47.569 "data_offset": 256,
00:27:47.569 "data_size": 7936
00:27:47.569 },
00:27:47.569 {
00:27:47.569 "name": "BaseBdev2",
00:27:47.569 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9",
00:27:47.569 "is_configured": true,
00:27:47.569 "data_offset": 256,
00:27:47.569 "data_size": 7936
00:27:47.569 }
00:27:47.569 ]
00:27:47.569 }'
00:27:47.569 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:27:47.569 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:27:47.569 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:27:47.827 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:27:47.827 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' true = true ']'
00:27:47.827 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' = false ']'
00:27:47.827 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected
00:27:47.827 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2
00:27:47.827 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']'
00:27:47.827 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']'
00:27:47.827 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # local timeout=1033
00:27:47.827 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:27:47.827 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:27:47.827 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:27:47.827 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:27:47.827 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare
00:27:47.827 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:27:47.827 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:47.827 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:48.085 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:27:48.085 "name": "raid_bdev1",
00:27:48.085 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de",
00:27:48.085 "strip_size_kb": 0,
00:27:48.085 "state": "online",
00:27:48.085 "raid_level": "raid1",
00:27:48.085 "superblock": true,
00:27:48.085 "num_base_bdevs": 2,
00:27:48.085 "num_base_bdevs_discovered": 2,
00:27:48.085 "num_base_bdevs_operational": 2,
00:27:48.085 "process": {
00:27:48.085 "type": "rebuild",
00:27:48.085 "target": "spare",
00:27:48.085 "progress": {
00:27:48.085 "blocks": 3840,
00:27:48.085 "percent": 48
00:27:48.085 }
00:27:48.085 },
00:27:48.085 "base_bdevs_list": [
00:27:48.085 {
00:27:48.085 "name": "spare",
00:27:48.085 "uuid": "a3e237aa-a64d-5ab3-a486-de5d2630fdde",
00:27:48.085 "is_configured": true,
00:27:48.085 "data_offset": 256,
00:27:48.085 "data_size": 7936
00:27:48.085 },
00:27:48.085 {
00:27:48.085 "name": "BaseBdev2",
00:27:48.085 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9",
00:27:48.085 "is_configured": true,
00:27:48.085 "data_offset": 256,
00:27:48.085 "data_size": 7936
00:27:48.085 }
00:27:48.085 ]
00:27:48.085 }'
00:27:48.085 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:27:48.085 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:27:48.085 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:27:48.085 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:27:48.085 13:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1
00:27:49.020 13:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:27:49.020 13:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:27:49.021 13:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:27:49.021 13:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:27:49.021 13:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare
00:27:49.021 13:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:27:49.021 13:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:49.021 13:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:49.278 13:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:27:49.278 "name": "raid_bdev1",
00:27:49.278 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de",
00:27:49.278 "strip_size_kb": 0,
00:27:49.278 "state": "online",
00:27:49.278 "raid_level": "raid1",
00:27:49.278 "superblock": true,
00:27:49.278 "num_base_bdevs": 2,
00:27:49.278 "num_base_bdevs_discovered": 2,
00:27:49.278 "num_base_bdevs_operational": 2,
00:27:49.278 "process": {
00:27:49.278 "type": "rebuild",
00:27:49.278 "target": "spare",
00:27:49.278 "progress": {
00:27:49.278 "blocks": 7168,
00:27:49.278 "percent": 90
00:27:49.278 }
00:27:49.278 },
00:27:49.278 "base_bdevs_list": [
00:27:49.278 {
00:27:49.278 "name": "spare",
00:27:49.278 "uuid": "a3e237aa-a64d-5ab3-a486-de5d2630fdde",
00:27:49.278 "is_configured": true,
00:27:49.278 "data_offset": 256,
00:27:49.278 "data_size": 7936
00:27:49.278 },
00:27:49.278 {
00:27:49.278 "name": "BaseBdev2",
00:27:49.278 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9",
00:27:49.278 "is_configured": true,
00:27:49.278 "data_offset": 256,
00:27:49.278 "data_size": 7936
00:27:49.278 }
00:27:49.278 ]
00:27:49.278 }'
00:27:49.278 13:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:27:49.278 13:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:27:49.279 13:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:27:49.279 13:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:27:49.279 13:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1
00:27:49.535 [2024-07-25 13:27:59.879302] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:27:49.535 [2024-07-25 13:27:59.879359] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:27:49.535 [2024-07-25 13:27:59.879432] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:27:50.468 13:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:27:50.468 13:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:27:50.468 13:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:27:50.468 13:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:27:50.468 13:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare
00:27:50.468 13:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:27:50.468 13:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:50.468 13:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:27:50.725 13:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:27:50.725 "name": "raid_bdev1",
00:27:50.725 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de",
00:27:50.725 "strip_size_kb": 0,
00:27:50.725 "state": "online",
00:27:50.725 "raid_level": "raid1",
00:27:50.725 "superblock": true,
00:27:50.725 "num_base_bdevs": 2,
00:27:50.725 "num_base_bdevs_discovered": 2,
00:27:50.725 "num_base_bdevs_operational": 2,
00:27:50.725 "base_bdevs_list": [
00:27:50.725 {
00:27:50.725 "name": "spare",
00:27:50.725 "uuid": "a3e237aa-a64d-5ab3-a486-de5d2630fdde",
00:27:50.725 "is_configured": true,
00:27:50.725 "data_offset": 256,
00:27:50.725 "data_size": 7936
00:27:50.725 },
00:27:50.725 {
00:27:50.725 "name": "BaseBdev2",
00:27:50.725 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9",
00:27:50.725 "is_configured": true,
00:27:50.725 "data_offset": 256,
00:27:50.725 "data_size": 7936
00:27:50.725 }
00:27:50.725 ]
00:27:50.725 }'
00:27:50.725 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:27:50.725 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]]
00:27:50.725 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:27:50.725 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]]
00:27:50.725 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@724 -- # break
00:27:50.725 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none
00:27:50.725 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local
raid_bdev_name=raid_bdev1 00:27:50.725 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:50.725 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:50.725 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:50.725 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.725 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.981 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:50.981 "name": "raid_bdev1", 00:27:50.981 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de", 00:27:50.981 "strip_size_kb": 0, 00:27:50.981 "state": "online", 00:27:50.981 "raid_level": "raid1", 00:27:50.981 "superblock": true, 00:27:50.981 "num_base_bdevs": 2, 00:27:50.981 "num_base_bdevs_discovered": 2, 00:27:50.981 "num_base_bdevs_operational": 2, 00:27:50.981 "base_bdevs_list": [ 00:27:50.981 { 00:27:50.981 "name": "spare", 00:27:50.981 "uuid": "a3e237aa-a64d-5ab3-a486-de5d2630fdde", 00:27:50.981 "is_configured": true, 00:27:50.981 "data_offset": 256, 00:27:50.981 "data_size": 7936 00:27:50.981 }, 00:27:50.981 { 00:27:50.981 "name": "BaseBdev2", 00:27:50.981 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9", 00:27:50.981 "is_configured": true, 00:27:50.981 "data_offset": 256, 00:27:50.981 "data_size": 7936 00:27:50.981 } 00:27:50.981 ] 00:27:50.981 }' 00:27:50.981 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:50.981 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:50.981 13:28:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:50.981 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:50.981 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:50.981 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:50.981 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:50.981 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:50.981 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:50.981 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:50.981 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:50.981 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:50.981 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:50.981 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:50.981 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.981 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:51.237 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:51.237 "name": "raid_bdev1", 00:27:51.237 "uuid": 
"0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de", 00:27:51.237 "strip_size_kb": 0, 00:27:51.237 "state": "online", 00:27:51.237 "raid_level": "raid1", 00:27:51.237 "superblock": true, 00:27:51.237 "num_base_bdevs": 2, 00:27:51.237 "num_base_bdevs_discovered": 2, 00:27:51.237 "num_base_bdevs_operational": 2, 00:27:51.237 "base_bdevs_list": [ 00:27:51.237 { 00:27:51.237 "name": "spare", 00:27:51.237 "uuid": "a3e237aa-a64d-5ab3-a486-de5d2630fdde", 00:27:51.237 "is_configured": true, 00:27:51.238 "data_offset": 256, 00:27:51.238 "data_size": 7936 00:27:51.238 }, 00:27:51.238 { 00:27:51.238 "name": "BaseBdev2", 00:27:51.238 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9", 00:27:51.238 "is_configured": true, 00:27:51.238 "data_offset": 256, 00:27:51.238 "data_size": 7936 00:27:51.238 } 00:27:51.238 ] 00:27:51.238 }' 00:27:51.238 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:51.238 13:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.802 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:52.060 [2024-07-25 13:28:02.430225] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:52.060 [2024-07-25 13:28:02.430248] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:52.060 [2024-07-25 13:28:02.430298] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:52.060 [2024-07-25 13:28:02.430354] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:52.060 [2024-07-25 13:28:02.430365] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20a54d0 name raid_bdev1, state offline 00:27:52.060 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.060 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # jq length 00:27:52.318 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:27:52.318 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:27:52.318 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:27:52.318 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:52.318 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:52.318 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:52.318 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:52.318 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:52.318 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:52.318 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:27:52.318 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:52.318 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:52.318 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:52.574 /dev/nbd0 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:52.574 1+0 records in 00:27:52.574 1+0 records out 00:27:52.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022244 s, 18.4 MB/s 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:52.574 13:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:52.832 /dev/nbd1 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:27:52.832 1+0 records in 00:27:52.832 1+0 records out 00:27:52.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304553 s, 13.4 MB/s 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:27:52.832 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:52.832 13:28:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:53.089 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:53.089 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:53.089 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:53.089 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:53.089 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:53.089 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:53.089 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:27:53.089 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:27:53.089 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:53.089 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:53.346 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:53.346 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:53.346 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:53.346 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:53.346 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:53.346 13:28:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:53.346 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:27:53.346 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:27:53.346 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:27:53.346 13:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:53.604 13:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:53.862 [2024-07-25 13:28:04.237652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:53.862 [2024-07-25 13:28:04.237687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:53.862 [2024-07-25 13:28:04.237704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20a87e0 00:27:53.862 [2024-07-25 13:28:04.237716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:53.862 [2024-07-25 13:28:04.239064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:53.862 [2024-07-25 13:28:04.239091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:53.862 [2024-07-25 13:28:04.239152] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:53.862 [2024-07-25 13:28:04.239178] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:53.862 [2024-07-25 13:28:04.239263] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:53.862 spare 00:27:53.862 
13:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:53.862 13:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:53.862 13:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:53.862 13:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:53.862 13:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:53.862 13:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:53.862 13:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:53.862 13:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:53.862 13:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:53.862 13:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:53.862 13:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.862 13:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.862 [2024-07-25 13:28:04.339567] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x20a7070 00:27:53.862 [2024-07-25 13:28:04.339583] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:53.862 [2024-07-25 13:28:04.339643] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20a7360 00:27:53.862 [2024-07-25 13:28:04.339749] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20a7070 00:27:53.862 [2024-07-25 13:28:04.339759] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20a7070 00:27:53.862 [2024-07-25 13:28:04.339828] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:54.120 13:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:54.120 "name": "raid_bdev1", 00:27:54.120 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de", 00:27:54.120 "strip_size_kb": 0, 00:27:54.120 "state": "online", 00:27:54.120 "raid_level": "raid1", 00:27:54.120 "superblock": true, 00:27:54.120 "num_base_bdevs": 2, 00:27:54.120 "num_base_bdevs_discovered": 2, 00:27:54.120 "num_base_bdevs_operational": 2, 00:27:54.120 "base_bdevs_list": [ 00:27:54.120 { 00:27:54.120 "name": "spare", 00:27:54.120 "uuid": "a3e237aa-a64d-5ab3-a486-de5d2630fdde", 00:27:54.120 "is_configured": true, 00:27:54.120 "data_offset": 256, 00:27:54.120 "data_size": 7936 00:27:54.120 }, 00:27:54.120 { 00:27:54.120 "name": "BaseBdev2", 00:27:54.120 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9", 00:27:54.120 "is_configured": true, 00:27:54.120 "data_offset": 256, 00:27:54.120 "data_size": 7936 00:27:54.120 } 00:27:54.120 ] 00:27:54.120 }' 00:27:54.120 13:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:54.120 13:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:54.686 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:54.686 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:54.686 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:54.686 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@184 -- # local target=none 00:27:54.686 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:54.686 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.686 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.944 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:54.944 "name": "raid_bdev1", 00:27:54.944 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de", 00:27:54.944 "strip_size_kb": 0, 00:27:54.944 "state": "online", 00:27:54.944 "raid_level": "raid1", 00:27:54.944 "superblock": true, 00:27:54.944 "num_base_bdevs": 2, 00:27:54.944 "num_base_bdevs_discovered": 2, 00:27:54.944 "num_base_bdevs_operational": 2, 00:27:54.944 "base_bdevs_list": [ 00:27:54.944 { 00:27:54.944 "name": "spare", 00:27:54.944 "uuid": "a3e237aa-a64d-5ab3-a486-de5d2630fdde", 00:27:54.944 "is_configured": true, 00:27:54.944 "data_offset": 256, 00:27:54.944 "data_size": 7936 00:27:54.944 }, 00:27:54.944 { 00:27:54.944 "name": "BaseBdev2", 00:27:54.944 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9", 00:27:54.944 "is_configured": true, 00:27:54.944 "data_offset": 256, 00:27:54.944 "data_size": 7936 00:27:54.944 } 00:27:54.944 ] 00:27:54.944 }' 00:27:54.944 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:54.944 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:54.944 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:54.944 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:54.944 
13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.944 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:55.202 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:27:55.202 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:55.460 [2024-07-25 13:28:05.785841] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:55.460 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:55.460 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:55.460 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:55.460 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:55.460 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:55.460 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:55.460 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:55.460 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:55.460 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:55.460 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 
00:27:55.460 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.460 13:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.719 13:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:55.719 "name": "raid_bdev1", 00:27:55.719 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de", 00:27:55.719 "strip_size_kb": 0, 00:27:55.719 "state": "online", 00:27:55.719 "raid_level": "raid1", 00:27:55.719 "superblock": true, 00:27:55.719 "num_base_bdevs": 2, 00:27:55.719 "num_base_bdevs_discovered": 1, 00:27:55.719 "num_base_bdevs_operational": 1, 00:27:55.719 "base_bdevs_list": [ 00:27:55.719 { 00:27:55.719 "name": null, 00:27:55.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.719 "is_configured": false, 00:27:55.719 "data_offset": 256, 00:27:55.719 "data_size": 7936 00:27:55.719 }, 00:27:55.719 { 00:27:55.719 "name": "BaseBdev2", 00:27:55.719 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9", 00:27:55.719 "is_configured": true, 00:27:55.719 "data_offset": 256, 00:27:55.719 "data_size": 7936 00:27:55.719 } 00:27:55.719 ] 00:27:55.719 }' 00:27:55.719 13:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:55.719 13:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:56.285 13:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:56.542 [2024-07-25 13:28:06.804539] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:56.542 [2024-07-25 13:28:06.804669] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:56.542 [2024-07-25 13:28:06.804684] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:56.542 [2024-07-25 13:28:06.804709] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:56.542 [2024-07-25 13:28:06.806770] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20a97b0 00:27:56.542 [2024-07-25 13:28:06.808903] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:56.542 13:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # sleep 1 00:27:57.475 13:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:57.475 13:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:57.475 13:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:57.475 13:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:57.475 13:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:57.475 13:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.475 13:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.733 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:57.733 "name": "raid_bdev1", 00:27:57.733 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de", 00:27:57.733 "strip_size_kb": 0, 00:27:57.733 "state": "online", 00:27:57.733 "raid_level": "raid1", 00:27:57.733 
"superblock": true, 00:27:57.733 "num_base_bdevs": 2, 00:27:57.733 "num_base_bdevs_discovered": 2, 00:27:57.733 "num_base_bdevs_operational": 2, 00:27:57.733 "process": { 00:27:57.733 "type": "rebuild", 00:27:57.733 "target": "spare", 00:27:57.733 "progress": { 00:27:57.733 "blocks": 3072, 00:27:57.733 "percent": 38 00:27:57.733 } 00:27:57.733 }, 00:27:57.733 "base_bdevs_list": [ 00:27:57.733 { 00:27:57.733 "name": "spare", 00:27:57.733 "uuid": "a3e237aa-a64d-5ab3-a486-de5d2630fdde", 00:27:57.733 "is_configured": true, 00:27:57.733 "data_offset": 256, 00:27:57.733 "data_size": 7936 00:27:57.733 }, 00:27:57.733 { 00:27:57.733 "name": "BaseBdev2", 00:27:57.733 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9", 00:27:57.733 "is_configured": true, 00:27:57.733 "data_offset": 256, 00:27:57.733 "data_size": 7936 00:27:57.733 } 00:27:57.733 ] 00:27:57.733 }' 00:27:57.733 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:57.733 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:57.733 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:57.733 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:57.733 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:57.992 [2024-07-25 13:28:08.349577] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:57.992 [2024-07-25 13:28:08.420763] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:57.992 [2024-07-25 13:28:08.420803] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:57.992 [2024-07-25 13:28:08.420817] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:57.992 [2024-07-25 13:28:08.420824] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:57.992 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:57.992 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:57.992 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:57.992 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:57.992 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:57.992 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:57.992 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:57.992 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:57.992 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:57.992 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:57.992 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.992 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.333 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:58.333 "name": "raid_bdev1", 00:27:58.333 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de", 
00:27:58.333 "strip_size_kb": 0, 00:27:58.333 "state": "online", 00:27:58.333 "raid_level": "raid1", 00:27:58.333 "superblock": true, 00:27:58.333 "num_base_bdevs": 2, 00:27:58.333 "num_base_bdevs_discovered": 1, 00:27:58.333 "num_base_bdevs_operational": 1, 00:27:58.333 "base_bdevs_list": [ 00:27:58.333 { 00:27:58.333 "name": null, 00:27:58.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.333 "is_configured": false, 00:27:58.333 "data_offset": 256, 00:27:58.333 "data_size": 7936 00:27:58.333 }, 00:27:58.333 { 00:27:58.333 "name": "BaseBdev2", 00:27:58.333 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9", 00:27:58.333 "is_configured": true, 00:27:58.333 "data_offset": 256, 00:27:58.333 "data_size": 7936 00:27:58.333 } 00:27:58.333 ] 00:27:58.333 }' 00:27:58.333 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:58.333 13:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:58.900 13:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:59.159 [2024-07-25 13:28:09.458530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:59.159 [2024-07-25 13:28:09.458577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:59.159 [2024-07-25 13:28:09.458597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2014330 00:27:59.159 [2024-07-25 13:28:09.458608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:59.159 [2024-07-25 13:28:09.458802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:59.159 [2024-07-25 13:28:09.458817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:59.159 [2024-07-25 13:28:09.458871] 
bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:59.159 [2024-07-25 13:28:09.458881] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:59.159 [2024-07-25 13:28:09.458891] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:59.159 [2024-07-25 13:28:09.458908] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:59.159 [2024-07-25 13:28:09.461318] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20a90d0 00:27:59.159 [2024-07-25 13:28:09.462678] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:59.159 spare 00:27:59.159 13:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # sleep 1 00:28:00.096 13:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:00.096 13:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:00.096 13:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:00.096 13:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:00.096 13:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:00.096 13:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.096 13:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.355 13:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:28:00.355 "name": "raid_bdev1", 00:28:00.355 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de", 00:28:00.355 "strip_size_kb": 0, 00:28:00.355 "state": "online", 00:28:00.355 "raid_level": "raid1", 00:28:00.355 "superblock": true, 00:28:00.355 "num_base_bdevs": 2, 00:28:00.355 "num_base_bdevs_discovered": 2, 00:28:00.355 "num_base_bdevs_operational": 2, 00:28:00.355 "process": { 00:28:00.355 "type": "rebuild", 00:28:00.355 "target": "spare", 00:28:00.355 "progress": { 00:28:00.355 "blocks": 3072, 00:28:00.355 "percent": 38 00:28:00.355 } 00:28:00.355 }, 00:28:00.355 "base_bdevs_list": [ 00:28:00.355 { 00:28:00.355 "name": "spare", 00:28:00.355 "uuid": "a3e237aa-a64d-5ab3-a486-de5d2630fdde", 00:28:00.355 "is_configured": true, 00:28:00.355 "data_offset": 256, 00:28:00.355 "data_size": 7936 00:28:00.355 }, 00:28:00.355 { 00:28:00.355 "name": "BaseBdev2", 00:28:00.355 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9", 00:28:00.355 "is_configured": true, 00:28:00.355 "data_offset": 256, 00:28:00.355 "data_size": 7936 00:28:00.355 } 00:28:00.355 ] 00:28:00.356 }' 00:28:00.356 13:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:00.356 13:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:00.356 13:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:00.356 13:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:00.356 13:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:00.613 [2024-07-25 13:28:11.003780] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:00.613 [2024-07-25 13:28:11.074489] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:28:00.613 [2024-07-25 13:28:11.074529] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:00.613 [2024-07-25 13:28:11.074542] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:00.613 [2024-07-25 13:28:11.074550] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:00.613 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:00.613 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:00.613 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:00.613 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:00.613 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:00.613 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:00.613 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:00.871 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:00.871 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:00.871 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:00.871 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.871 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.871 
13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:00.871 "name": "raid_bdev1", 00:28:00.871 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de", 00:28:00.871 "strip_size_kb": 0, 00:28:00.871 "state": "online", 00:28:00.871 "raid_level": "raid1", 00:28:00.871 "superblock": true, 00:28:00.871 "num_base_bdevs": 2, 00:28:00.871 "num_base_bdevs_discovered": 1, 00:28:00.871 "num_base_bdevs_operational": 1, 00:28:00.871 "base_bdevs_list": [ 00:28:00.871 { 00:28:00.871 "name": null, 00:28:00.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.871 "is_configured": false, 00:28:00.871 "data_offset": 256, 00:28:00.871 "data_size": 7936 00:28:00.871 }, 00:28:00.871 { 00:28:00.871 "name": "BaseBdev2", 00:28:00.871 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9", 00:28:00.871 "is_configured": true, 00:28:00.871 "data_offset": 256, 00:28:00.871 "data_size": 7936 00:28:00.871 } 00:28:00.871 ] 00:28:00.871 }' 00:28:00.871 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:00.871 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:01.438 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:01.438 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:01.438 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:01.438 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:01.438 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:01.438 13:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:01.438 13:28:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.696 13:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:01.696 "name": "raid_bdev1", 00:28:01.696 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de", 00:28:01.696 "strip_size_kb": 0, 00:28:01.696 "state": "online", 00:28:01.696 "raid_level": "raid1", 00:28:01.696 "superblock": true, 00:28:01.696 "num_base_bdevs": 2, 00:28:01.696 "num_base_bdevs_discovered": 1, 00:28:01.696 "num_base_bdevs_operational": 1, 00:28:01.696 "base_bdevs_list": [ 00:28:01.696 { 00:28:01.696 "name": null, 00:28:01.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.696 "is_configured": false, 00:28:01.696 "data_offset": 256, 00:28:01.697 "data_size": 7936 00:28:01.697 }, 00:28:01.697 { 00:28:01.697 "name": "BaseBdev2", 00:28:01.697 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9", 00:28:01.697 "is_configured": true, 00:28:01.697 "data_offset": 256, 00:28:01.697 "data_size": 7936 00:28:01.697 } 00:28:01.697 ] 00:28:01.697 }' 00:28:01.697 13:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:01.955 13:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:01.955 13:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:01.955 13:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:01.955 13:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:02.214 13:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@788 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:02.214 [2024-07-25 13:28:12.673607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:02.214 [2024-07-25 13:28:12.673650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:02.214 [2024-07-25 13:28:12.673667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f0e930 00:28:02.214 [2024-07-25 13:28:12.673679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:02.214 [2024-07-25 13:28:12.673853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:02.214 [2024-07-25 13:28:12.673867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:02.214 [2024-07-25 13:28:12.673909] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:02.215 [2024-07-25 13:28:12.673919] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:02.215 [2024-07-25 13:28:12.673928] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:02.215 BaseBdev1 00:28:02.215 13:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@789 -- # sleep 1 00:28:03.591 13:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:03.591 13:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:03.591 13:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:03.591 13:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:03.591 13:28:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:03.591 13:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:03.591 13:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:03.591 13:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:03.591 13:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:03.591 13:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:03.591 13:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.591 13:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.591 13:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:03.591 "name": "raid_bdev1", 00:28:03.591 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de", 00:28:03.591 "strip_size_kb": 0, 00:28:03.591 "state": "online", 00:28:03.591 "raid_level": "raid1", 00:28:03.591 "superblock": true, 00:28:03.591 "num_base_bdevs": 2, 00:28:03.591 "num_base_bdevs_discovered": 1, 00:28:03.591 "num_base_bdevs_operational": 1, 00:28:03.591 "base_bdevs_list": [ 00:28:03.591 { 00:28:03.591 "name": null, 00:28:03.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.591 "is_configured": false, 00:28:03.591 "data_offset": 256, 00:28:03.591 "data_size": 7936 00:28:03.591 }, 00:28:03.591 { 00:28:03.591 "name": "BaseBdev2", 00:28:03.591 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9", 00:28:03.591 "is_configured": true, 00:28:03.591 "data_offset": 256, 00:28:03.591 "data_size": 7936 00:28:03.591 } 00:28:03.591 ] 
00:28:03.591 }' 00:28:03.591 13:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:03.591 13:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:04.158 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:04.158 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:04.158 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:04.158 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:04.158 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:04.158 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.158 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.417 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:04.417 "name": "raid_bdev1", 00:28:04.417 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de", 00:28:04.417 "strip_size_kb": 0, 00:28:04.418 "state": "online", 00:28:04.418 "raid_level": "raid1", 00:28:04.418 "superblock": true, 00:28:04.418 "num_base_bdevs": 2, 00:28:04.418 "num_base_bdevs_discovered": 1, 00:28:04.418 "num_base_bdevs_operational": 1, 00:28:04.418 "base_bdevs_list": [ 00:28:04.418 { 00:28:04.418 "name": null, 00:28:04.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:04.418 "is_configured": false, 00:28:04.418 "data_offset": 256, 00:28:04.418 "data_size": 7936 00:28:04.418 }, 00:28:04.418 { 00:28:04.418 "name": "BaseBdev2", 00:28:04.418 "uuid": 
"568232b3-b37b-5786-988e-9da1a7f0f0a9", 00:28:04.418 "is_configured": true, 00:28:04.418 "data_offset": 256, 00:28:04.418 "data_size": 7936 00:28:04.418 } 00:28:04.418 ] 00:28:04.418 }' 00:28:04.418 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:04.418 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:04.418 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:04.418 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:04.418 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:04.418 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:28:04.418 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:04.418 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:04.418 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:04.418 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:04.418 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:04.418 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type 
-P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:04.418 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:04.418 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:04.418 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:28:04.418 13:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:04.677 [2024-07-25 13:28:15.027811] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:04.677 [2024-07-25 13:28:15.027915] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:04.677 [2024-07-25 13:28:15.027929] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:04.677 request: 00:28:04.677 { 00:28:04.677 "base_bdev": "BaseBdev1", 00:28:04.677 "raid_bdev": "raid_bdev1", 00:28:04.677 "method": "bdev_raid_add_base_bdev", 00:28:04.677 "req_id": 1 00:28:04.677 } 00:28:04.677 Got JSON-RPC error response 00:28:04.677 response: 00:28:04.677 { 00:28:04.677 "code": -22, 00:28:04.677 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:04.677 } 00:28:04.677 13:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:28:04.677 13:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:04.677 13:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:04.677 
13:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:04.677 13:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@793 -- # sleep 1 00:28:05.613 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:05.613 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:05.613 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:05.613 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:05.613 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:05.613 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:05.613 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:05.613 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:05.613 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:05.613 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:05.613 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.613 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.871 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:05.871 "name": "raid_bdev1", 00:28:05.871 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de", 00:28:05.871 
"strip_size_kb": 0, 00:28:05.871 "state": "online", 00:28:05.871 "raid_level": "raid1", 00:28:05.871 "superblock": true, 00:28:05.871 "num_base_bdevs": 2, 00:28:05.871 "num_base_bdevs_discovered": 1, 00:28:05.871 "num_base_bdevs_operational": 1, 00:28:05.871 "base_bdevs_list": [ 00:28:05.871 { 00:28:05.871 "name": null, 00:28:05.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.871 "is_configured": false, 00:28:05.871 "data_offset": 256, 00:28:05.871 "data_size": 7936 00:28:05.871 }, 00:28:05.871 { 00:28:05.871 "name": "BaseBdev2", 00:28:05.871 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9", 00:28:05.871 "is_configured": true, 00:28:05.871 "data_offset": 256, 00:28:05.871 "data_size": 7936 00:28:05.871 } 00:28:05.871 ] 00:28:05.871 }' 00:28:05.871 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:05.871 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:06.438 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:06.438 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:06.438 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:06.438 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:06.438 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:06.438 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.438 13:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:06.697 13:28:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:06.697 "name": "raid_bdev1", 00:28:06.697 "uuid": "0fd1cd80-5dac-4ba8-b62a-2ad7f45dd3de", 00:28:06.697 "strip_size_kb": 0, 00:28:06.697 "state": "online", 00:28:06.697 "raid_level": "raid1", 00:28:06.697 "superblock": true, 00:28:06.697 "num_base_bdevs": 2, 00:28:06.697 "num_base_bdevs_discovered": 1, 00:28:06.697 "num_base_bdevs_operational": 1, 00:28:06.697 "base_bdevs_list": [ 00:28:06.697 { 00:28:06.697 "name": null, 00:28:06.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.697 "is_configured": false, 00:28:06.697 "data_offset": 256, 00:28:06.697 "data_size": 7936 00:28:06.697 }, 00:28:06.697 { 00:28:06.697 "name": "BaseBdev2", 00:28:06.697 "uuid": "568232b3-b37b-5786-988e-9da1a7f0f0a9", 00:28:06.697 "is_configured": true, 00:28:06.697 "data_offset": 256, 00:28:06.697 "data_size": 7936 00:28:06.697 } 00:28:06.697 ] 00:28:06.697 }' 00:28:06.697 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:06.697 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:06.697 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:06.697 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:06.697 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@798 -- # killprocess 1009603 00:28:06.697 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 1009603 ']' 00:28:06.697 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 1009603 00:28:06.697 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:28:06.697 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:06.697 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1009603 00:28:06.956 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:06.956 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:06.956 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1009603' 00:28:06.956 killing process with pid 1009603 00:28:06.956 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 1009603 00:28:06.956 Received shutdown signal, test time was about 60.000000 seconds 00:28:06.956 00:28:06.956 Latency(us) 00:28:06.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.956 =================================================================================================================== 00:28:06.956 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:06.956 [2024-07-25 13:28:17.217715] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:06.956 [2024-07-25 13:28:17.217795] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:06.956 [2024-07-25 13:28:17.217838] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:06.956 [2024-07-25 13:28:17.217850] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20a7070 name raid_bdev1, state offline 00:28:06.956 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 1009603 00:28:06.956 [2024-07-25 13:28:17.245975] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:06.956 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@800 -- # return 0 
00:28:06.956 00:28:06.956 real 0m30.434s 00:28:06.956 user 0m47.004s 00:28:06.956 sys 0m5.007s 00:28:06.956 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:06.956 13:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:06.956 ************************************ 00:28:06.956 END TEST raid_rebuild_test_sb_md_separate 00:28:06.956 ************************************ 00:28:07.215 13:28:17 bdev_raid -- bdev/bdev_raid.sh@991 -- # base_malloc_params='-m 32 -i' 00:28:07.215 13:28:17 bdev_raid -- bdev/bdev_raid.sh@992 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:28:07.215 13:28:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:28:07.215 13:28:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:07.215 13:28:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:07.215 ************************************ 00:28:07.215 START TEST raid_state_function_test_sb_md_interleaved 00:28:07.215 ************************************ 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:28:07.215 13:28:17 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=1015087 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1015087' 00:28:07.215 Process raid pid: 1015087 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 1015087 /var/tmp/spdk-raid.sock 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 1015087 ']' 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:07.215 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:07.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:07.216 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:07.216 13:28:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:07.216 [2024-07-25 13:28:17.588718] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:28:07.216 [2024-07-25 13:28:17.588775] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:07.216 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:07.216 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:07.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.216 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:07.475 [2024-07-25 13:28:17.721826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.475 [2024-07-25 13:28:17.808646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.475 [2024-07-25 13:28:17.872055] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:07.475 [2024-07-25 13:28:17.872086] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:08.043 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:08.043 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:28:08.043 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:08.301 [2024-07-25 13:28:18.691127] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:08.301 [2024-07-25 13:28:18.691172] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:08.301 [2024-07-25 13:28:18.691182] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:08.301 [2024-07-25 13:28:18.691193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:08.301 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:08.301 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:08.301 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:08.301 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:08.301 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:08.301 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:08.301 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:08.301 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:08.301 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:08.301 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:08.301 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.301 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:28:08.559 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:08.559 "name": "Existed_Raid", 00:28:08.559 "uuid": "4a169cc3-de98-41a1-a616-e89137fe55c6", 00:28:08.559 "strip_size_kb": 0, 00:28:08.559 "state": "configuring", 00:28:08.559 "raid_level": "raid1", 00:28:08.559 "superblock": true, 00:28:08.559 "num_base_bdevs": 2, 00:28:08.559 "num_base_bdevs_discovered": 0, 00:28:08.559 "num_base_bdevs_operational": 2, 00:28:08.559 "base_bdevs_list": [ 00:28:08.559 { 00:28:08.559 "name": "BaseBdev1", 00:28:08.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.559 "is_configured": false, 00:28:08.559 "data_offset": 0, 00:28:08.559 "data_size": 0 00:28:08.559 }, 00:28:08.559 { 00:28:08.559 "name": "BaseBdev2", 00:28:08.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.559 "is_configured": false, 00:28:08.559 "data_offset": 0, 00:28:08.559 "data_size": 0 00:28:08.559 } 00:28:08.559 ] 00:28:08.559 }' 00:28:08.559 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:08.559 13:28:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.127 13:28:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:09.385 [2024-07-25 13:28:19.701731] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:09.385 [2024-07-25 13:28:19.701760] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf94f20 name Existed_Raid, state configuring 00:28:09.386 13:28:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 
'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:09.645 [2024-07-25 13:28:19.930349] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:09.645 [2024-07-25 13:28:19.930376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:09.645 [2024-07-25 13:28:19.930385] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:09.645 [2024-07-25 13:28:19.930396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:09.645 13:28:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:28:09.903 [2024-07-25 13:28:20.164449] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:09.903 BaseBdev1 00:28:09.903 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:28:09.903 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:28:09.903 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:09.903 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:28:09.903 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:09.903 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:09.903 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:10.161 13:28:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:10.420 [ 00:28:10.420 { 00:28:10.420 "name": "BaseBdev1", 00:28:10.420 "aliases": [ 00:28:10.420 "0ddfdbdc-260f-4272-880e-067816a80d36" 00:28:10.420 ], 00:28:10.420 "product_name": "Malloc disk", 00:28:10.420 "block_size": 4128, 00:28:10.420 "num_blocks": 8192, 00:28:10.420 "uuid": "0ddfdbdc-260f-4272-880e-067816a80d36", 00:28:10.420 "md_size": 32, 00:28:10.420 "md_interleave": true, 00:28:10.420 "dif_type": 0, 00:28:10.420 "assigned_rate_limits": { 00:28:10.420 "rw_ios_per_sec": 0, 00:28:10.420 "rw_mbytes_per_sec": 0, 00:28:10.420 "r_mbytes_per_sec": 0, 00:28:10.420 "w_mbytes_per_sec": 0 00:28:10.420 }, 00:28:10.420 "claimed": true, 00:28:10.420 "claim_type": "exclusive_write", 00:28:10.420 "zoned": false, 00:28:10.420 "supported_io_types": { 00:28:10.420 "read": true, 00:28:10.420 "write": true, 00:28:10.420 "unmap": true, 00:28:10.420 "flush": true, 00:28:10.421 "reset": true, 00:28:10.421 "nvme_admin": false, 00:28:10.421 "nvme_io": false, 00:28:10.421 "nvme_io_md": false, 00:28:10.421 "write_zeroes": true, 00:28:10.421 "zcopy": true, 00:28:10.421 "get_zone_info": false, 00:28:10.421 "zone_management": false, 00:28:10.421 "zone_append": false, 00:28:10.421 "compare": false, 00:28:10.421 "compare_and_write": false, 00:28:10.421 "abort": true, 00:28:10.421 "seek_hole": false, 00:28:10.421 "seek_data": false, 00:28:10.421 "copy": true, 00:28:10.421 "nvme_iov_md": false 00:28:10.421 }, 00:28:10.421 "memory_domains": [ 00:28:10.421 { 00:28:10.421 "dma_device_id": "system", 00:28:10.421 "dma_device_type": 1 00:28:10.421 }, 00:28:10.421 { 00:28:10.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.421 "dma_device_type": 2 00:28:10.421 } 00:28:10.421 ], 00:28:10.421 "driver_specific": {} 00:28:10.421 } 00:28:10.421 ] 00:28:10.421 13:28:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:28:10.421 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:10.421 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:10.421 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:10.421 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:10.421 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:10.421 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:10.421 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:10.421 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:10.421 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:10.421 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:10.421 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:10.679 13:28:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:10.679 13:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:10.679 "name": "Existed_Raid", 00:28:10.679 "uuid": 
"048cf408-491e-4924-975d-b7fe800f83c4", 00:28:10.679 "strip_size_kb": 0, 00:28:10.679 "state": "configuring", 00:28:10.679 "raid_level": "raid1", 00:28:10.679 "superblock": true, 00:28:10.679 "num_base_bdevs": 2, 00:28:10.679 "num_base_bdevs_discovered": 1, 00:28:10.680 "num_base_bdevs_operational": 2, 00:28:10.680 "base_bdevs_list": [ 00:28:10.680 { 00:28:10.680 "name": "BaseBdev1", 00:28:10.680 "uuid": "0ddfdbdc-260f-4272-880e-067816a80d36", 00:28:10.680 "is_configured": true, 00:28:10.680 "data_offset": 256, 00:28:10.680 "data_size": 7936 00:28:10.680 }, 00:28:10.680 { 00:28:10.680 "name": "BaseBdev2", 00:28:10.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:10.680 "is_configured": false, 00:28:10.680 "data_offset": 0, 00:28:10.680 "data_size": 0 00:28:10.680 } 00:28:10.680 ] 00:28:10.680 }' 00:28:10.680 13:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:10.680 13:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.615 13:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:11.874 [2024-07-25 13:28:22.173930] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:11.874 [2024-07-25 13:28:22.173969] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf94810 name Existed_Raid, state configuring 00:28:11.874 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:12.171 [2024-07-25 13:28:22.390533] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:12.171 [2024-07-25 13:28:22.391909] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:12.171 [2024-07-25 13:28:22.391939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:12.172 "name": "Existed_Raid", 00:28:12.172 "uuid": "94279661-5f10-42ac-bcb4-c61015bba10e", 00:28:12.172 "strip_size_kb": 0, 00:28:12.172 "state": "configuring", 00:28:12.172 "raid_level": "raid1", 00:28:12.172 "superblock": true, 00:28:12.172 "num_base_bdevs": 2, 00:28:12.172 "num_base_bdevs_discovered": 1, 00:28:12.172 "num_base_bdevs_operational": 2, 00:28:12.172 "base_bdevs_list": [ 00:28:12.172 { 00:28:12.172 "name": "BaseBdev1", 00:28:12.172 "uuid": "0ddfdbdc-260f-4272-880e-067816a80d36", 00:28:12.172 "is_configured": true, 00:28:12.172 "data_offset": 256, 00:28:12.172 "data_size": 7936 00:28:12.172 }, 00:28:12.172 { 00:28:12.172 "name": "BaseBdev2", 00:28:12.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:12.172 "is_configured": false, 00:28:12.172 "data_offset": 0, 00:28:12.172 "data_size": 0 00:28:12.172 } 00:28:12.172 ] 00:28:12.172 }' 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:12.172 13:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.763 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:28:13.022 [2024-07-25 13:28:23.428530] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:13.022 [2024-07-25 13:28:23.428647] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xf96700 00:28:13.022 [2024-07-25 13:28:23.428659] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:28:13.022 [2024-07-25 13:28:23.428716] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf93f10 00:28:13.022 [2024-07-25 13:28:23.428784] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf96700 00:28:13.022 [2024-07-25 13:28:23.428793] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xf96700 00:28:13.022 [2024-07-25 13:28:23.428843] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:13.022 BaseBdev2 00:28:13.022 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:28:13.022 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:28:13.022 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:13.022 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:28:13.022 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:13.022 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:13.022 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:13.281 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:13.540 [ 00:28:13.540 { 00:28:13.540 "name": "BaseBdev2", 00:28:13.540 "aliases": [ 00:28:13.540 "29325a3c-48e6-4ed4-9df4-38c3a04c3fa6" 00:28:13.540 ], 00:28:13.540 "product_name": "Malloc 
disk", 00:28:13.540 "block_size": 4128, 00:28:13.540 "num_blocks": 8192, 00:28:13.540 "uuid": "29325a3c-48e6-4ed4-9df4-38c3a04c3fa6", 00:28:13.540 "md_size": 32, 00:28:13.540 "md_interleave": true, 00:28:13.540 "dif_type": 0, 00:28:13.540 "assigned_rate_limits": { 00:28:13.540 "rw_ios_per_sec": 0, 00:28:13.540 "rw_mbytes_per_sec": 0, 00:28:13.540 "r_mbytes_per_sec": 0, 00:28:13.540 "w_mbytes_per_sec": 0 00:28:13.540 }, 00:28:13.540 "claimed": true, 00:28:13.540 "claim_type": "exclusive_write", 00:28:13.540 "zoned": false, 00:28:13.540 "supported_io_types": { 00:28:13.540 "read": true, 00:28:13.540 "write": true, 00:28:13.540 "unmap": true, 00:28:13.540 "flush": true, 00:28:13.540 "reset": true, 00:28:13.540 "nvme_admin": false, 00:28:13.540 "nvme_io": false, 00:28:13.540 "nvme_io_md": false, 00:28:13.540 "write_zeroes": true, 00:28:13.540 "zcopy": true, 00:28:13.540 "get_zone_info": false, 00:28:13.540 "zone_management": false, 00:28:13.540 "zone_append": false, 00:28:13.540 "compare": false, 00:28:13.540 "compare_and_write": false, 00:28:13.540 "abort": true, 00:28:13.540 "seek_hole": false, 00:28:13.540 "seek_data": false, 00:28:13.540 "copy": true, 00:28:13.540 "nvme_iov_md": false 00:28:13.540 }, 00:28:13.540 "memory_domains": [ 00:28:13.540 { 00:28:13.540 "dma_device_id": "system", 00:28:13.540 "dma_device_type": 1 00:28:13.540 }, 00:28:13.540 { 00:28:13.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:13.540 "dma_device_type": 2 00:28:13.540 } 00:28:13.540 ], 00:28:13.540 "driver_specific": {} 00:28:13.540 } 00:28:13.540 ] 00:28:13.540 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:28:13.540 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:13.540 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:13.540 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:28:13.540 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:13.540 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:13.540 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:13.540 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:13.540 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:13.540 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:13.540 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:13.540 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:13.540 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:13.540 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.540 13:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:13.799 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:13.799 "name": "Existed_Raid", 00:28:13.799 "uuid": "94279661-5f10-42ac-bcb4-c61015bba10e", 00:28:13.799 "strip_size_kb": 0, 00:28:13.799 "state": "online", 00:28:13.799 "raid_level": "raid1", 00:28:13.799 "superblock": true, 00:28:13.799 
"num_base_bdevs": 2, 00:28:13.799 "num_base_bdevs_discovered": 2, 00:28:13.799 "num_base_bdevs_operational": 2, 00:28:13.799 "base_bdevs_list": [ 00:28:13.799 { 00:28:13.799 "name": "BaseBdev1", 00:28:13.799 "uuid": "0ddfdbdc-260f-4272-880e-067816a80d36", 00:28:13.799 "is_configured": true, 00:28:13.799 "data_offset": 256, 00:28:13.799 "data_size": 7936 00:28:13.799 }, 00:28:13.799 { 00:28:13.799 "name": "BaseBdev2", 00:28:13.799 "uuid": "29325a3c-48e6-4ed4-9df4-38c3a04c3fa6", 00:28:13.799 "is_configured": true, 00:28:13.799 "data_offset": 256, 00:28:13.799 "data_size": 7936 00:28:13.799 } 00:28:13.799 ] 00:28:13.799 }' 00:28:13.799 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:13.799 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.368 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:28:14.368 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:14.368 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:14.368 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:14.368 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:14.368 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:28:14.368 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:14.368 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:14.627 
[2024-07-25 13:28:24.920716] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:14.627 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:14.627 "name": "Existed_Raid", 00:28:14.627 "aliases": [ 00:28:14.627 "94279661-5f10-42ac-bcb4-c61015bba10e" 00:28:14.627 ], 00:28:14.627 "product_name": "Raid Volume", 00:28:14.627 "block_size": 4128, 00:28:14.627 "num_blocks": 7936, 00:28:14.627 "uuid": "94279661-5f10-42ac-bcb4-c61015bba10e", 00:28:14.627 "md_size": 32, 00:28:14.627 "md_interleave": true, 00:28:14.627 "dif_type": 0, 00:28:14.627 "assigned_rate_limits": { 00:28:14.627 "rw_ios_per_sec": 0, 00:28:14.627 "rw_mbytes_per_sec": 0, 00:28:14.627 "r_mbytes_per_sec": 0, 00:28:14.627 "w_mbytes_per_sec": 0 00:28:14.627 }, 00:28:14.627 "claimed": false, 00:28:14.627 "zoned": false, 00:28:14.627 "supported_io_types": { 00:28:14.627 "read": true, 00:28:14.627 "write": true, 00:28:14.627 "unmap": false, 00:28:14.627 "flush": false, 00:28:14.627 "reset": true, 00:28:14.627 "nvme_admin": false, 00:28:14.627 "nvme_io": false, 00:28:14.627 "nvme_io_md": false, 00:28:14.627 "write_zeroes": true, 00:28:14.627 "zcopy": false, 00:28:14.627 "get_zone_info": false, 00:28:14.627 "zone_management": false, 00:28:14.627 "zone_append": false, 00:28:14.627 "compare": false, 00:28:14.627 "compare_and_write": false, 00:28:14.627 "abort": false, 00:28:14.627 "seek_hole": false, 00:28:14.627 "seek_data": false, 00:28:14.627 "copy": false, 00:28:14.627 "nvme_iov_md": false 00:28:14.627 }, 00:28:14.627 "memory_domains": [ 00:28:14.627 { 00:28:14.627 "dma_device_id": "system", 00:28:14.627 "dma_device_type": 1 00:28:14.627 }, 00:28:14.627 { 00:28:14.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:14.627 "dma_device_type": 2 00:28:14.627 }, 00:28:14.627 { 00:28:14.627 "dma_device_id": "system", 00:28:14.627 "dma_device_type": 1 00:28:14.627 }, 00:28:14.627 { 00:28:14.627 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:28:14.627 "dma_device_type": 2 00:28:14.628 } 00:28:14.628 ], 00:28:14.628 "driver_specific": { 00:28:14.628 "raid": { 00:28:14.628 "uuid": "94279661-5f10-42ac-bcb4-c61015bba10e", 00:28:14.628 "strip_size_kb": 0, 00:28:14.628 "state": "online", 00:28:14.628 "raid_level": "raid1", 00:28:14.628 "superblock": true, 00:28:14.628 "num_base_bdevs": 2, 00:28:14.628 "num_base_bdevs_discovered": 2, 00:28:14.628 "num_base_bdevs_operational": 2, 00:28:14.628 "base_bdevs_list": [ 00:28:14.628 { 00:28:14.628 "name": "BaseBdev1", 00:28:14.628 "uuid": "0ddfdbdc-260f-4272-880e-067816a80d36", 00:28:14.628 "is_configured": true, 00:28:14.628 "data_offset": 256, 00:28:14.628 "data_size": 7936 00:28:14.628 }, 00:28:14.628 { 00:28:14.628 "name": "BaseBdev2", 00:28:14.628 "uuid": "29325a3c-48e6-4ed4-9df4-38c3a04c3fa6", 00:28:14.628 "is_configured": true, 00:28:14.628 "data_offset": 256, 00:28:14.628 "data_size": 7936 00:28:14.628 } 00:28:14.628 ] 00:28:14.628 } 00:28:14.628 } 00:28:14.628 }' 00:28:14.628 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:14.628 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:28:14.628 BaseBdev2' 00:28:14.628 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:14.628 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:28:14.628 13:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:14.887 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:14.887 "name": "BaseBdev1", 
00:28:14.887 "aliases": [ 00:28:14.887 "0ddfdbdc-260f-4272-880e-067816a80d36" 00:28:14.887 ], 00:28:14.887 "product_name": "Malloc disk", 00:28:14.887 "block_size": 4128, 00:28:14.887 "num_blocks": 8192, 00:28:14.887 "uuid": "0ddfdbdc-260f-4272-880e-067816a80d36", 00:28:14.887 "md_size": 32, 00:28:14.887 "md_interleave": true, 00:28:14.887 "dif_type": 0, 00:28:14.887 "assigned_rate_limits": { 00:28:14.887 "rw_ios_per_sec": 0, 00:28:14.887 "rw_mbytes_per_sec": 0, 00:28:14.887 "r_mbytes_per_sec": 0, 00:28:14.887 "w_mbytes_per_sec": 0 00:28:14.887 }, 00:28:14.887 "claimed": true, 00:28:14.887 "claim_type": "exclusive_write", 00:28:14.887 "zoned": false, 00:28:14.887 "supported_io_types": { 00:28:14.887 "read": true, 00:28:14.887 "write": true, 00:28:14.887 "unmap": true, 00:28:14.887 "flush": true, 00:28:14.887 "reset": true, 00:28:14.887 "nvme_admin": false, 00:28:14.887 "nvme_io": false, 00:28:14.887 "nvme_io_md": false, 00:28:14.887 "write_zeroes": true, 00:28:14.887 "zcopy": true, 00:28:14.887 "get_zone_info": false, 00:28:14.887 "zone_management": false, 00:28:14.887 "zone_append": false, 00:28:14.887 "compare": false, 00:28:14.887 "compare_and_write": false, 00:28:14.887 "abort": true, 00:28:14.887 "seek_hole": false, 00:28:14.887 "seek_data": false, 00:28:14.887 "copy": true, 00:28:14.887 "nvme_iov_md": false 00:28:14.887 }, 00:28:14.887 "memory_domains": [ 00:28:14.887 { 00:28:14.887 "dma_device_id": "system", 00:28:14.887 "dma_device_type": 1 00:28:14.887 }, 00:28:14.887 { 00:28:14.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:14.887 "dma_device_type": 2 00:28:14.887 } 00:28:14.887 ], 00:28:14.887 "driver_specific": {} 00:28:14.887 }' 00:28:14.887 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:14.887 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:14.887 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:14.887 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:14.887 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:15.146 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:15.146 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:15.146 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:15.146 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:15.146 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:15.146 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:15.146 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:15.146 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:15.146 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:15.146 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:15.405 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:15.405 "name": "BaseBdev2", 00:28:15.405 "aliases": [ 00:28:15.405 "29325a3c-48e6-4ed4-9df4-38c3a04c3fa6" 00:28:15.405 ], 00:28:15.405 "product_name": "Malloc disk", 00:28:15.405 "block_size": 4128, 00:28:15.405 "num_blocks": 8192, 00:28:15.405 "uuid": 
"29325a3c-48e6-4ed4-9df4-38c3a04c3fa6", 00:28:15.405 "md_size": 32, 00:28:15.405 "md_interleave": true, 00:28:15.405 "dif_type": 0, 00:28:15.405 "assigned_rate_limits": { 00:28:15.405 "rw_ios_per_sec": 0, 00:28:15.405 "rw_mbytes_per_sec": 0, 00:28:15.405 "r_mbytes_per_sec": 0, 00:28:15.405 "w_mbytes_per_sec": 0 00:28:15.405 }, 00:28:15.405 "claimed": true, 00:28:15.405 "claim_type": "exclusive_write", 00:28:15.405 "zoned": false, 00:28:15.405 "supported_io_types": { 00:28:15.405 "read": true, 00:28:15.405 "write": true, 00:28:15.405 "unmap": true, 00:28:15.405 "flush": true, 00:28:15.405 "reset": true, 00:28:15.405 "nvme_admin": false, 00:28:15.405 "nvme_io": false, 00:28:15.405 "nvme_io_md": false, 00:28:15.405 "write_zeroes": true, 00:28:15.405 "zcopy": true, 00:28:15.405 "get_zone_info": false, 00:28:15.405 "zone_management": false, 00:28:15.405 "zone_append": false, 00:28:15.405 "compare": false, 00:28:15.405 "compare_and_write": false, 00:28:15.405 "abort": true, 00:28:15.405 "seek_hole": false, 00:28:15.405 "seek_data": false, 00:28:15.405 "copy": true, 00:28:15.405 "nvme_iov_md": false 00:28:15.405 }, 00:28:15.405 "memory_domains": [ 00:28:15.405 { 00:28:15.405 "dma_device_id": "system", 00:28:15.405 "dma_device_type": 1 00:28:15.405 }, 00:28:15.405 { 00:28:15.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.405 "dma_device_type": 2 00:28:15.405 } 00:28:15.405 ], 00:28:15.405 "driver_specific": {} 00:28:15.405 }' 00:28:15.405 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:15.405 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:15.405 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:15.406 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:15.665 13:28:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:15.665 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:15.665 13:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:15.665 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:15.665 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:15.665 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:15.665 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:15.665 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:15.665 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:15.954 [2024-07-25 13:28:26.340256] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.954 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:16.214 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:16.215 "name": "Existed_Raid", 00:28:16.215 "uuid": "94279661-5f10-42ac-bcb4-c61015bba10e", 00:28:16.215 "strip_size_kb": 0, 00:28:16.215 "state": "online", 00:28:16.215 "raid_level": "raid1", 00:28:16.215 "superblock": true, 00:28:16.215 "num_base_bdevs": 2, 00:28:16.215 
"num_base_bdevs_discovered": 1, 00:28:16.215 "num_base_bdevs_operational": 1, 00:28:16.215 "base_bdevs_list": [ 00:28:16.215 { 00:28:16.215 "name": null, 00:28:16.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.215 "is_configured": false, 00:28:16.215 "data_offset": 256, 00:28:16.215 "data_size": 7936 00:28:16.215 }, 00:28:16.215 { 00:28:16.215 "name": "BaseBdev2", 00:28:16.215 "uuid": "29325a3c-48e6-4ed4-9df4-38c3a04c3fa6", 00:28:16.215 "is_configured": true, 00:28:16.215 "data_offset": 256, 00:28:16.215 "data_size": 7936 00:28:16.215 } 00:28:16.215 ] 00:28:16.215 }' 00:28:16.215 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:16.215 13:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.783 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:28:16.783 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:16.783 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.783 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:17.042 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:17.042 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:17.042 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:17.302 [2024-07-25 13:28:27.584543] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:17.302 [2024-07-25 13:28:27.584617] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:17.302 [2024-07-25 13:28:27.595202] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:17.302 [2024-07-25 13:28:27.595231] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:17.302 [2024-07-25 13:28:27.595242] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf96700 name Existed_Raid, state offline 00:28:17.302 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:17.302 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:17.302 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.302 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:28:17.561 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:28:17.561 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:28:17.561 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:28:17.561 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 1015087 00:28:17.561 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 1015087 ']' 00:28:17.561 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 1015087 00:28:17.561 13:28:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:28:17.561 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:17.561 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1015087 00:28:17.561 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:17.561 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:17.561 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1015087' 00:28:17.561 killing process with pid 1015087 00:28:17.561 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 1015087 00:28:17.561 [2024-07-25 13:28:27.900770] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:17.561 13:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 1015087 00:28:17.561 [2024-07-25 13:28:27.901631] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:17.821 13:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:28:17.821 00:28:17.821 real 0m10.570s 00:28:17.821 user 0m18.843s 00:28:17.821 sys 0m1.955s 00:28:17.821 13:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:17.821 13:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:17.821 ************************************ 00:28:17.821 END TEST raid_state_function_test_sb_md_interleaved 00:28:17.821 ************************************ 00:28:17.821 13:28:28 bdev_raid -- bdev/bdev_raid.sh@993 -- # 
run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:28:17.821 13:28:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:17.821 13:28:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:17.821 13:28:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:17.821 ************************************ 00:28:17.821 START TEST raid_superblock_test_md_interleaved 00:28:17.821 ************************************ 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@414 -- # local strip_size 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:28:17.821 13:28:28 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@427 -- # raid_pid=1017151 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@428 -- # waitforlisten 1017151 /var/tmp/spdk-raid.sock 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 1017151 ']' 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:17.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:17.821 13:28:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:17.821 [2024-07-25 13:28:28.226403] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:28:17.821 [2024-07-25 13:28:28.226460] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1017151 ] 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3d:02.3 cannot be used 
00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:17.821 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:17.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:17.821 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:18.080 [2024-07-25 13:28:28.356659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.080 [2024-07-25 13:28:28.442462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.080 [2024-07-25 13:28:28.502707] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:18.080 [2024-07-25 13:28:28.502749] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:18.648 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:18.648 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:28:18.648 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:28:18.648 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:18.648 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:28:18.648 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:28:18.648 
13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:18.648 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:18.648 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:18.648 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:18.648 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:28:18.907 malloc1 00:28:18.907 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:19.166 [2024-07-25 13:28:29.504228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:19.166 [2024-07-25 13:28:29.504271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:19.166 [2024-07-25 13:28:29.504288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23f4350 00:28:19.166 [2024-07-25 13:28:29.504300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:19.166 [2024-07-25 13:28:29.505653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:19.166 [2024-07-25 13:28:29.505680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:19.166 pt1 00:28:19.167 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:19.167 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( 
i <= num_base_bdevs )) 00:28:19.167 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:28:19.167 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:28:19.167 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:19.167 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:19.167 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:19.167 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:19.167 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:28:19.425 malloc2 00:28:19.425 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:19.684 [2024-07-25 13:28:29.966162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:19.684 [2024-07-25 13:28:29.966203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:19.684 [2024-07-25 13:28:29.966218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2250570 00:28:19.684 [2024-07-25 13:28:29.966229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:19.684 [2024-07-25 13:28:29.967426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:19.684 [2024-07-25 13:28:29.967450] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:19.684 pt2 00:28:19.684 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:19.684 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:19.684 13:28:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:28:19.943 [2024-07-25 13:28:30.194768] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:19.943 [2024-07-25 13:28:30.195955] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:19.943 [2024-07-25 13:28:30.196071] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x23f4b20 00:28:19.943 [2024-07-25 13:28:30.196083] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:19.943 [2024-07-25 13:28:30.196165] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2254560 00:28:19.943 [2024-07-25 13:28:30.196243] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23f4b20 00:28:19.943 [2024-07-25 13:28:30.196252] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23f4b20 00:28:19.943 [2024-07-25 13:28:30.196316] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:19.943 13:28:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:19.943 13:28:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:19.943 13:28:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:19.943 13:28:30 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:19.943 13:28:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:19.943 13:28:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:19.943 13:28:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:19.943 13:28:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:19.943 13:28:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:19.943 13:28:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:19.943 13:28:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.943 13:28:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:20.202 13:28:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:20.202 "name": "raid_bdev1", 00:28:20.202 "uuid": "27f32032-3448-46dd-94ad-64eb66138727", 00:28:20.202 "strip_size_kb": 0, 00:28:20.202 "state": "online", 00:28:20.202 "raid_level": "raid1", 00:28:20.202 "superblock": true, 00:28:20.202 "num_base_bdevs": 2, 00:28:20.202 "num_base_bdevs_discovered": 2, 00:28:20.202 "num_base_bdevs_operational": 2, 00:28:20.202 "base_bdevs_list": [ 00:28:20.202 { 00:28:20.202 "name": "pt1", 00:28:20.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:20.202 "is_configured": true, 00:28:20.202 "data_offset": 256, 00:28:20.202 "data_size": 7936 00:28:20.202 }, 00:28:20.202 { 00:28:20.202 "name": "pt2", 00:28:20.202 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:28:20.202 "is_configured": true, 00:28:20.202 "data_offset": 256, 00:28:20.202 "data_size": 7936 00:28:20.202 } 00:28:20.202 ] 00:28:20.202 }' 00:28:20.202 13:28:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:20.202 13:28:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:20.770 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:28:20.770 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:20.770 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:20.770 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:20.770 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:20.771 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:28:20.771 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:20.771 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:20.771 [2024-07-25 13:28:31.237853] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:21.029 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:21.029 "name": "raid_bdev1", 00:28:21.029 "aliases": [ 00:28:21.029 "27f32032-3448-46dd-94ad-64eb66138727" 00:28:21.029 ], 00:28:21.029 "product_name": "Raid Volume", 00:28:21.029 "block_size": 4128, 00:28:21.029 "num_blocks": 7936, 00:28:21.029 "uuid": 
"27f32032-3448-46dd-94ad-64eb66138727", 00:28:21.029 "md_size": 32, 00:28:21.029 "md_interleave": true, 00:28:21.029 "dif_type": 0, 00:28:21.029 "assigned_rate_limits": { 00:28:21.029 "rw_ios_per_sec": 0, 00:28:21.029 "rw_mbytes_per_sec": 0, 00:28:21.029 "r_mbytes_per_sec": 0, 00:28:21.029 "w_mbytes_per_sec": 0 00:28:21.029 }, 00:28:21.029 "claimed": false, 00:28:21.029 "zoned": false, 00:28:21.029 "supported_io_types": { 00:28:21.029 "read": true, 00:28:21.029 "write": true, 00:28:21.029 "unmap": false, 00:28:21.029 "flush": false, 00:28:21.029 "reset": true, 00:28:21.030 "nvme_admin": false, 00:28:21.030 "nvme_io": false, 00:28:21.030 "nvme_io_md": false, 00:28:21.030 "write_zeroes": true, 00:28:21.030 "zcopy": false, 00:28:21.030 "get_zone_info": false, 00:28:21.030 "zone_management": false, 00:28:21.030 "zone_append": false, 00:28:21.030 "compare": false, 00:28:21.030 "compare_and_write": false, 00:28:21.030 "abort": false, 00:28:21.030 "seek_hole": false, 00:28:21.030 "seek_data": false, 00:28:21.030 "copy": false, 00:28:21.030 "nvme_iov_md": false 00:28:21.030 }, 00:28:21.030 "memory_domains": [ 00:28:21.030 { 00:28:21.030 "dma_device_id": "system", 00:28:21.030 "dma_device_type": 1 00:28:21.030 }, 00:28:21.030 { 00:28:21.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:21.030 "dma_device_type": 2 00:28:21.030 }, 00:28:21.030 { 00:28:21.030 "dma_device_id": "system", 00:28:21.030 "dma_device_type": 1 00:28:21.030 }, 00:28:21.030 { 00:28:21.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:21.030 "dma_device_type": 2 00:28:21.030 } 00:28:21.030 ], 00:28:21.030 "driver_specific": { 00:28:21.030 "raid": { 00:28:21.030 "uuid": "27f32032-3448-46dd-94ad-64eb66138727", 00:28:21.030 "strip_size_kb": 0, 00:28:21.030 "state": "online", 00:28:21.030 "raid_level": "raid1", 00:28:21.030 "superblock": true, 00:28:21.030 "num_base_bdevs": 2, 00:28:21.030 "num_base_bdevs_discovered": 2, 00:28:21.030 "num_base_bdevs_operational": 2, 00:28:21.030 "base_bdevs_list": [ 
00:28:21.030 { 00:28:21.030 "name": "pt1", 00:28:21.030 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:21.030 "is_configured": true, 00:28:21.030 "data_offset": 256, 00:28:21.030 "data_size": 7936 00:28:21.030 }, 00:28:21.030 { 00:28:21.030 "name": "pt2", 00:28:21.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:21.030 "is_configured": true, 00:28:21.030 "data_offset": 256, 00:28:21.030 "data_size": 7936 00:28:21.030 } 00:28:21.030 ] 00:28:21.030 } 00:28:21.030 } 00:28:21.030 }' 00:28:21.030 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:21.030 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:21.030 pt2' 00:28:21.030 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:21.030 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:21.030 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:21.288 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:21.288 "name": "pt1", 00:28:21.288 "aliases": [ 00:28:21.288 "00000000-0000-0000-0000-000000000001" 00:28:21.288 ], 00:28:21.288 "product_name": "passthru", 00:28:21.288 "block_size": 4128, 00:28:21.288 "num_blocks": 8192, 00:28:21.288 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:21.288 "md_size": 32, 00:28:21.288 "md_interleave": true, 00:28:21.288 "dif_type": 0, 00:28:21.288 "assigned_rate_limits": { 00:28:21.288 "rw_ios_per_sec": 0, 00:28:21.288 "rw_mbytes_per_sec": 0, 00:28:21.288 "r_mbytes_per_sec": 0, 00:28:21.288 "w_mbytes_per_sec": 0 00:28:21.288 }, 00:28:21.288 "claimed": true, 
00:28:21.288 "claim_type": "exclusive_write", 00:28:21.288 "zoned": false, 00:28:21.288 "supported_io_types": { 00:28:21.288 "read": true, 00:28:21.288 "write": true, 00:28:21.288 "unmap": true, 00:28:21.288 "flush": true, 00:28:21.288 "reset": true, 00:28:21.288 "nvme_admin": false, 00:28:21.288 "nvme_io": false, 00:28:21.288 "nvme_io_md": false, 00:28:21.288 "write_zeroes": true, 00:28:21.288 "zcopy": true, 00:28:21.289 "get_zone_info": false, 00:28:21.289 "zone_management": false, 00:28:21.289 "zone_append": false, 00:28:21.289 "compare": false, 00:28:21.289 "compare_and_write": false, 00:28:21.289 "abort": true, 00:28:21.289 "seek_hole": false, 00:28:21.289 "seek_data": false, 00:28:21.289 "copy": true, 00:28:21.289 "nvme_iov_md": false 00:28:21.289 }, 00:28:21.289 "memory_domains": [ 00:28:21.289 { 00:28:21.289 "dma_device_id": "system", 00:28:21.289 "dma_device_type": 1 00:28:21.289 }, 00:28:21.289 { 00:28:21.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:21.289 "dma_device_type": 2 00:28:21.289 } 00:28:21.289 ], 00:28:21.289 "driver_specific": { 00:28:21.289 "passthru": { 00:28:21.289 "name": "pt1", 00:28:21.289 "base_bdev_name": "malloc1" 00:28:21.289 } 00:28:21.289 } 00:28:21.289 }' 00:28:21.289 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:21.289 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:21.289 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:21.289 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:21.289 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:21.289 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:21.289 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:28:21.289 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:21.547 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:21.547 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:21.547 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:21.547 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:21.547 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:21.547 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:21.547 13:28:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:21.806 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:21.806 "name": "pt2", 00:28:21.806 "aliases": [ 00:28:21.806 "00000000-0000-0000-0000-000000000002" 00:28:21.806 ], 00:28:21.806 "product_name": "passthru", 00:28:21.806 "block_size": 4128, 00:28:21.806 "num_blocks": 8192, 00:28:21.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:21.806 "md_size": 32, 00:28:21.806 "md_interleave": true, 00:28:21.806 "dif_type": 0, 00:28:21.806 "assigned_rate_limits": { 00:28:21.806 "rw_ios_per_sec": 0, 00:28:21.806 "rw_mbytes_per_sec": 0, 00:28:21.806 "r_mbytes_per_sec": 0, 00:28:21.806 "w_mbytes_per_sec": 0 00:28:21.806 }, 00:28:21.806 "claimed": true, 00:28:21.806 "claim_type": "exclusive_write", 00:28:21.806 "zoned": false, 00:28:21.806 "supported_io_types": { 00:28:21.806 "read": true, 00:28:21.806 "write": true, 00:28:21.806 "unmap": true, 00:28:21.806 "flush": true, 00:28:21.806 "reset": 
true, 00:28:21.806 "nvme_admin": false, 00:28:21.806 "nvme_io": false, 00:28:21.806 "nvme_io_md": false, 00:28:21.806 "write_zeroes": true, 00:28:21.806 "zcopy": true, 00:28:21.806 "get_zone_info": false, 00:28:21.806 "zone_management": false, 00:28:21.806 "zone_append": false, 00:28:21.806 "compare": false, 00:28:21.806 "compare_and_write": false, 00:28:21.806 "abort": true, 00:28:21.807 "seek_hole": false, 00:28:21.807 "seek_data": false, 00:28:21.807 "copy": true, 00:28:21.807 "nvme_iov_md": false 00:28:21.807 }, 00:28:21.807 "memory_domains": [ 00:28:21.807 { 00:28:21.807 "dma_device_id": "system", 00:28:21.807 "dma_device_type": 1 00:28:21.807 }, 00:28:21.807 { 00:28:21.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:21.807 "dma_device_type": 2 00:28:21.807 } 00:28:21.807 ], 00:28:21.807 "driver_specific": { 00:28:21.807 "passthru": { 00:28:21.807 "name": "pt2", 00:28:21.807 "base_bdev_name": "malloc2" 00:28:21.807 } 00:28:21.807 } 00:28:21.807 }' 00:28:21.807 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:21.807 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:21.807 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:21.807 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:21.807 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:21.807 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:21.807 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:22.065 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:22.065 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 
00:28:22.065 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:22.065 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:22.065 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:22.065 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:22.065 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:28:22.325 [2024-07-25 13:28:32.645550] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:22.325 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=27f32032-3448-46dd-94ad-64eb66138727 00:28:22.325 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' -z 27f32032-3448-46dd-94ad-64eb66138727 ']' 00:28:22.325 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:22.584 [2024-07-25 13:28:32.873900] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:22.584 [2024-07-25 13:28:32.873917] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:22.584 [2024-07-25 13:28:32.873964] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:22.584 [2024-07-25 13:28:32.874011] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:22.584 [2024-07-25 13:28:32.874022] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23f4b20 name raid_bdev1, state offline 00:28:22.584 13:28:32 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.584 13:28:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:28:22.843 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:28:22.843 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:28:22.843 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:28:22.843 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:23.102 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:28:23.102 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:23.102 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:28:23.102 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:23.360 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:28:23.360 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:28:23.360 13:28:33 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:28:23.360 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:28:23.360 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:23.360 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:23.360 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:23.360 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:23.360 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:23.361 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:23.361 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:23.361 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:28:23.361 13:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:28:23.619 [2024-07-25 13:28:34.016860] bdev_raid.c:3312:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev malloc1 is claimed 00:28:23.619 [2024-07-25 13:28:34.018105] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:23.619 [2024-07-25 13:28:34.018166] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:28:23.619 [2024-07-25 13:28:34.018204] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:28:23.619 [2024-07-25 13:28:34.018221] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:23.620 [2024-07-25 13:28:34.018231] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2253eb0 name raid_bdev1, state configuring 00:28:23.620 request: 00:28:23.620 { 00:28:23.620 "name": "raid_bdev1", 00:28:23.620 "raid_level": "raid1", 00:28:23.620 "base_bdevs": [ 00:28:23.620 "malloc1", 00:28:23.620 "malloc2" 00:28:23.620 ], 00:28:23.620 "superblock": false, 00:28:23.620 "method": "bdev_raid_create", 00:28:23.620 "req_id": 1 00:28:23.620 } 00:28:23.620 Got JSON-RPC error response 00:28:23.620 response: 00:28:23.620 { 00:28:23.620 "code": -17, 00:28:23.620 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:23.620 } 00:28:23.620 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:28:23.620 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:23.620 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:23.620 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:23.620 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.620 13:28:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:28:23.878 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:28:23.879 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:28:23.879 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:24.138 [2024-07-25 13:28:34.474019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:24.138 [2024-07-25 13:28:34.474063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.138 [2024-07-25 13:28:34.474081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23f5290 00:28:24.138 [2024-07-25 13:28:34.474093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.138 [2024-07-25 13:28:34.475402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.138 [2024-07-25 13:28:34.475428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:24.138 [2024-07-25 13:28:34.475469] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:24.138 [2024-07-25 13:28:34.475490] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:24.138 pt1 00:28:24.138 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:28:24.138 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:24.138 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:24.138 13:28:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:24.138 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:24.138 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:24.138 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:24.138 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:24.138 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:24.138 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:24.138 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.138 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.429 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:24.429 "name": "raid_bdev1", 00:28:24.429 "uuid": "27f32032-3448-46dd-94ad-64eb66138727", 00:28:24.429 "strip_size_kb": 0, 00:28:24.429 "state": "configuring", 00:28:24.429 "raid_level": "raid1", 00:28:24.429 "superblock": true, 00:28:24.429 "num_base_bdevs": 2, 00:28:24.429 "num_base_bdevs_discovered": 1, 00:28:24.429 "num_base_bdevs_operational": 2, 00:28:24.429 "base_bdevs_list": [ 00:28:24.429 { 00:28:24.429 "name": "pt1", 00:28:24.429 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:24.429 "is_configured": true, 00:28:24.429 "data_offset": 256, 00:28:24.429 "data_size": 7936 00:28:24.429 }, 00:28:24.429 { 00:28:24.429 "name": null, 00:28:24.429 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:28:24.429 "is_configured": false, 00:28:24.429 "data_offset": 256, 00:28:24.429 "data_size": 7936 00:28:24.429 } 00:28:24.429 ] 00:28:24.429 }' 00:28:24.429 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:24.429 13:28:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.026 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:28:25.026 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:28:25.026 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:28:25.026 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:25.026 [2024-07-25 13:28:35.504750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:25.026 [2024-07-25 13:28:35.504798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:25.026 [2024-07-25 13:28:35.504815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2254310 00:28:25.026 [2024-07-25 13:28:35.504826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:25.026 [2024-07-25 13:28:35.504980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:25.026 [2024-07-25 13:28:35.504995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:25.026 [2024-07-25 13:28:35.505036] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:25.026 [2024-07-25 13:28:35.505053] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:28:25.026 [2024-07-25 13:28:35.505129] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x22521f0 00:28:25.026 [2024-07-25 13:28:35.505146] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:25.026 [2024-07-25 13:28:35.505196] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x224f6f0 00:28:25.026 [2024-07-25 13:28:35.505263] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22521f0 00:28:25.026 [2024-07-25 13:28:35.505272] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22521f0 00:28:25.026 [2024-07-25 13:28:35.505335] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:25.026 pt2 00:28:25.284 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:28:25.284 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:28:25.284 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:25.284 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:25.284 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:25.284 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:25.285 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:25.285 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:25.285 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:25.285 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:28:25.285 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:25.285 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:25.285 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.285 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.285 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:25.285 "name": "raid_bdev1", 00:28:25.285 "uuid": "27f32032-3448-46dd-94ad-64eb66138727", 00:28:25.285 "strip_size_kb": 0, 00:28:25.285 "state": "online", 00:28:25.285 "raid_level": "raid1", 00:28:25.285 "superblock": true, 00:28:25.285 "num_base_bdevs": 2, 00:28:25.285 "num_base_bdevs_discovered": 2, 00:28:25.285 "num_base_bdevs_operational": 2, 00:28:25.285 "base_bdevs_list": [ 00:28:25.285 { 00:28:25.285 "name": "pt1", 00:28:25.285 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:25.285 "is_configured": true, 00:28:25.285 "data_offset": 256, 00:28:25.285 "data_size": 7936 00:28:25.285 }, 00:28:25.285 { 00:28:25.285 "name": "pt2", 00:28:25.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:25.285 "is_configured": true, 00:28:25.285 "data_offset": 256, 00:28:25.285 "data_size": 7936 00:28:25.285 } 00:28:25.285 ] 00:28:25.285 }' 00:28:25.285 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:25.285 13:28:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.852 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:28:25.852 13:28:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:25.852 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:25.852 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:25.852 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:25.852 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:28:25.852 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:25.852 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:26.111 [2024-07-25 13:28:36.531698] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:26.111 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:26.111 "name": "raid_bdev1", 00:28:26.111 "aliases": [ 00:28:26.111 "27f32032-3448-46dd-94ad-64eb66138727" 00:28:26.111 ], 00:28:26.111 "product_name": "Raid Volume", 00:28:26.111 "block_size": 4128, 00:28:26.111 "num_blocks": 7936, 00:28:26.111 "uuid": "27f32032-3448-46dd-94ad-64eb66138727", 00:28:26.111 "md_size": 32, 00:28:26.111 "md_interleave": true, 00:28:26.111 "dif_type": 0, 00:28:26.111 "assigned_rate_limits": { 00:28:26.111 "rw_ios_per_sec": 0, 00:28:26.111 "rw_mbytes_per_sec": 0, 00:28:26.111 "r_mbytes_per_sec": 0, 00:28:26.111 "w_mbytes_per_sec": 0 00:28:26.111 }, 00:28:26.111 "claimed": false, 00:28:26.111 "zoned": false, 00:28:26.111 "supported_io_types": { 00:28:26.111 "read": true, 00:28:26.111 "write": true, 00:28:26.111 "unmap": false, 00:28:26.111 "flush": false, 00:28:26.111 "reset": true, 00:28:26.111 "nvme_admin": false, 
00:28:26.111 "nvme_io": false, 00:28:26.111 "nvme_io_md": false, 00:28:26.111 "write_zeroes": true, 00:28:26.111 "zcopy": false, 00:28:26.111 "get_zone_info": false, 00:28:26.111 "zone_management": false, 00:28:26.111 "zone_append": false, 00:28:26.111 "compare": false, 00:28:26.111 "compare_and_write": false, 00:28:26.111 "abort": false, 00:28:26.111 "seek_hole": false, 00:28:26.111 "seek_data": false, 00:28:26.111 "copy": false, 00:28:26.111 "nvme_iov_md": false 00:28:26.111 }, 00:28:26.111 "memory_domains": [ 00:28:26.111 { 00:28:26.111 "dma_device_id": "system", 00:28:26.111 "dma_device_type": 1 00:28:26.111 }, 00:28:26.111 { 00:28:26.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:26.111 "dma_device_type": 2 00:28:26.111 }, 00:28:26.111 { 00:28:26.111 "dma_device_id": "system", 00:28:26.111 "dma_device_type": 1 00:28:26.111 }, 00:28:26.111 { 00:28:26.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:26.111 "dma_device_type": 2 00:28:26.111 } 00:28:26.111 ], 00:28:26.111 "driver_specific": { 00:28:26.111 "raid": { 00:28:26.111 "uuid": "27f32032-3448-46dd-94ad-64eb66138727", 00:28:26.111 "strip_size_kb": 0, 00:28:26.111 "state": "online", 00:28:26.111 "raid_level": "raid1", 00:28:26.111 "superblock": true, 00:28:26.111 "num_base_bdevs": 2, 00:28:26.111 "num_base_bdevs_discovered": 2, 00:28:26.111 "num_base_bdevs_operational": 2, 00:28:26.111 "base_bdevs_list": [ 00:28:26.111 { 00:28:26.111 "name": "pt1", 00:28:26.111 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:26.111 "is_configured": true, 00:28:26.111 "data_offset": 256, 00:28:26.111 "data_size": 7936 00:28:26.111 }, 00:28:26.111 { 00:28:26.111 "name": "pt2", 00:28:26.111 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:26.111 "is_configured": true, 00:28:26.111 "data_offset": 256, 00:28:26.111 "data_size": 7936 00:28:26.111 } 00:28:26.111 ] 00:28:26.111 } 00:28:26.111 } 00:28:26.111 }' 00:28:26.111 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:26.111 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:26.111 pt2' 00:28:26.111 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:26.370 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:26.370 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:26.370 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:26.370 "name": "pt1", 00:28:26.370 "aliases": [ 00:28:26.370 "00000000-0000-0000-0000-000000000001" 00:28:26.370 ], 00:28:26.370 "product_name": "passthru", 00:28:26.370 "block_size": 4128, 00:28:26.370 "num_blocks": 8192, 00:28:26.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:26.370 "md_size": 32, 00:28:26.370 "md_interleave": true, 00:28:26.370 "dif_type": 0, 00:28:26.370 "assigned_rate_limits": { 00:28:26.370 "rw_ios_per_sec": 0, 00:28:26.370 "rw_mbytes_per_sec": 0, 00:28:26.370 "r_mbytes_per_sec": 0, 00:28:26.370 "w_mbytes_per_sec": 0 00:28:26.370 }, 00:28:26.370 "claimed": true, 00:28:26.370 "claim_type": "exclusive_write", 00:28:26.370 "zoned": false, 00:28:26.370 "supported_io_types": { 00:28:26.370 "read": true, 00:28:26.370 "write": true, 00:28:26.370 "unmap": true, 00:28:26.370 "flush": true, 00:28:26.370 "reset": true, 00:28:26.370 "nvme_admin": false, 00:28:26.370 "nvme_io": false, 00:28:26.370 "nvme_io_md": false, 00:28:26.370 "write_zeroes": true, 00:28:26.370 "zcopy": true, 00:28:26.370 "get_zone_info": false, 00:28:26.370 "zone_management": false, 00:28:26.370 "zone_append": false, 00:28:26.370 "compare": false, 00:28:26.370 "compare_and_write": false, 00:28:26.370 
"abort": true, 00:28:26.370 "seek_hole": false, 00:28:26.370 "seek_data": false, 00:28:26.370 "copy": true, 00:28:26.370 "nvme_iov_md": false 00:28:26.370 }, 00:28:26.370 "memory_domains": [ 00:28:26.370 { 00:28:26.370 "dma_device_id": "system", 00:28:26.370 "dma_device_type": 1 00:28:26.370 }, 00:28:26.370 { 00:28:26.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:26.370 "dma_device_type": 2 00:28:26.370 } 00:28:26.370 ], 00:28:26.370 "driver_specific": { 00:28:26.370 "passthru": { 00:28:26.370 "name": "pt1", 00:28:26.370 "base_bdev_name": "malloc1" 00:28:26.370 } 00:28:26.370 } 00:28:26.370 }' 00:28:26.370 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:26.629 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:26.629 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:26.629 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:26.629 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:26.629 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:26.629 13:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:26.629 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:26.629 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:26.629 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:26.887 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:26.887 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:26.887 13:28:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:26.887 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:26.887 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:27.146 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:27.146 "name": "pt2", 00:28:27.146 "aliases": [ 00:28:27.146 "00000000-0000-0000-0000-000000000002" 00:28:27.146 ], 00:28:27.146 "product_name": "passthru", 00:28:27.146 "block_size": 4128, 00:28:27.146 "num_blocks": 8192, 00:28:27.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:27.146 "md_size": 32, 00:28:27.146 "md_interleave": true, 00:28:27.146 "dif_type": 0, 00:28:27.146 "assigned_rate_limits": { 00:28:27.146 "rw_ios_per_sec": 0, 00:28:27.146 "rw_mbytes_per_sec": 0, 00:28:27.146 "r_mbytes_per_sec": 0, 00:28:27.146 "w_mbytes_per_sec": 0 00:28:27.146 }, 00:28:27.146 "claimed": true, 00:28:27.146 "claim_type": "exclusive_write", 00:28:27.146 "zoned": false, 00:28:27.146 "supported_io_types": { 00:28:27.146 "read": true, 00:28:27.146 "write": true, 00:28:27.146 "unmap": true, 00:28:27.146 "flush": true, 00:28:27.146 "reset": true, 00:28:27.146 "nvme_admin": false, 00:28:27.146 "nvme_io": false, 00:28:27.146 "nvme_io_md": false, 00:28:27.146 "write_zeroes": true, 00:28:27.146 "zcopy": true, 00:28:27.146 "get_zone_info": false, 00:28:27.146 "zone_management": false, 00:28:27.146 "zone_append": false, 00:28:27.146 "compare": false, 00:28:27.146 "compare_and_write": false, 00:28:27.146 "abort": true, 00:28:27.146 "seek_hole": false, 00:28:27.146 "seek_data": false, 00:28:27.146 "copy": true, 00:28:27.146 "nvme_iov_md": false 00:28:27.146 }, 00:28:27.146 "memory_domains": [ 00:28:27.146 { 00:28:27.146 "dma_device_id": 
"system", 00:28:27.146 "dma_device_type": 1 00:28:27.146 }, 00:28:27.146 { 00:28:27.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:27.146 "dma_device_type": 2 00:28:27.146 } 00:28:27.146 ], 00:28:27.146 "driver_specific": { 00:28:27.146 "passthru": { 00:28:27.146 "name": "pt2", 00:28:27.146 "base_bdev_name": "malloc2" 00:28:27.146 } 00:28:27.146 } 00:28:27.146 }' 00:28:27.146 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:27.146 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:27.146 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:27.146 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:27.146 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:27.146 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:27.146 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:27.146 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:27.405 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:27.405 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:27.405 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:27.405 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:27.405 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:27.405 13:28:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:28:27.664 [2024-07-25 13:28:37.959463] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:27.664 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # '[' 27f32032-3448-46dd-94ad-64eb66138727 '!=' 27f32032-3448-46dd-94ad-64eb66138727 ']' 00:28:27.664 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:28:27.664 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:27.664 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:28:27.664 13:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:27.923 [2024-07-25 13:28:38.187829] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:27.923 13:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:27.923 13:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:27.923 13:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:27.923 13:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:27.923 13:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:27.923 13:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:27.923 13:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:27.923 13:28:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:27.923 13:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:27.923 13:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:27.923 13:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.923 13:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.182 13:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:28.182 "name": "raid_bdev1", 00:28:28.182 "uuid": "27f32032-3448-46dd-94ad-64eb66138727", 00:28:28.182 "strip_size_kb": 0, 00:28:28.182 "state": "online", 00:28:28.182 "raid_level": "raid1", 00:28:28.182 "superblock": true, 00:28:28.182 "num_base_bdevs": 2, 00:28:28.182 "num_base_bdevs_discovered": 1, 00:28:28.182 "num_base_bdevs_operational": 1, 00:28:28.182 "base_bdevs_list": [ 00:28:28.182 { 00:28:28.182 "name": null, 00:28:28.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.182 "is_configured": false, 00:28:28.182 "data_offset": 256, 00:28:28.182 "data_size": 7936 00:28:28.182 }, 00:28:28.182 { 00:28:28.182 "name": "pt2", 00:28:28.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:28.182 "is_configured": true, 00:28:28.182 "data_offset": 256, 00:28:28.182 "data_size": 7936 00:28:28.182 } 00:28:28.182 ] 00:28:28.182 }' 00:28:28.182 13:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:28.182 13:28:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:28.747 13:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@514 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:28.747 [2024-07-25 13:28:39.198468] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:28.747 [2024-07-25 13:28:39.198490] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:28.747 [2024-07-25 13:28:39.198536] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:28.747 [2024-07-25 13:28:39.198574] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:28.747 [2024-07-25 13:28:39.198589] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22521f0 name raid_bdev1, state offline 00:28:28.747 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:28:28.747 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.005 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:28:29.005 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:28:29.005 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:28:29.005 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:28:29.005 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:29.264 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:29.264 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < 
num_base_bdevs )) 00:28:29.264 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:28:29.264 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:28:29.264 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@534 -- # i=1 00:28:29.264 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:29.523 [2024-07-25 13:28:39.880234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:29.523 [2024-07-25 13:28:39.880275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:29.523 [2024-07-25 13:28:39.880290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2253eb0 00:28:29.523 [2024-07-25 13:28:39.880301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:29.523 [2024-07-25 13:28:39.881620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:29.523 [2024-07-25 13:28:39.881645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:29.523 [2024-07-25 13:28:39.881688] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:29.523 [2024-07-25 13:28:39.881710] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:29.523 [2024-07-25 13:28:39.881772] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x224ec20 00:28:29.523 [2024-07-25 13:28:39.881781] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:29.523 [2024-07-25 13:28:39.881833] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2258870 00:28:29.523 [2024-07-25 13:28:39.881899] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x224ec20 00:28:29.523 [2024-07-25 13:28:39.881908] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x224ec20 00:28:29.523 [2024-07-25 13:28:39.881958] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:29.523 pt2 00:28:29.523 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:29.523 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:29.523 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:29.523 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:29.523 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:29.523 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:29.523 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:29.523 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:29.523 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:29.523 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:29.523 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.523 13:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.782 13:28:40 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:29.782 "name": "raid_bdev1", 00:28:29.782 "uuid": "27f32032-3448-46dd-94ad-64eb66138727", 00:28:29.782 "strip_size_kb": 0, 00:28:29.782 "state": "online", 00:28:29.782 "raid_level": "raid1", 00:28:29.782 "superblock": true, 00:28:29.782 "num_base_bdevs": 2, 00:28:29.782 "num_base_bdevs_discovered": 1, 00:28:29.782 "num_base_bdevs_operational": 1, 00:28:29.782 "base_bdevs_list": [ 00:28:29.782 { 00:28:29.782 "name": null, 00:28:29.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.782 "is_configured": false, 00:28:29.782 "data_offset": 256, 00:28:29.782 "data_size": 7936 00:28:29.782 }, 00:28:29.782 { 00:28:29.782 "name": "pt2", 00:28:29.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:29.782 "is_configured": true, 00:28:29.782 "data_offset": 256, 00:28:29.782 "data_size": 7936 00:28:29.782 } 00:28:29.782 ] 00:28:29.782 }' 00:28:29.782 13:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:29.782 13:28:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:30.349 13:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:30.608 [2024-07-25 13:28:40.906920] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:30.608 [2024-07-25 13:28:40.906942] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:30.608 [2024-07-25 13:28:40.906990] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:30.608 [2024-07-25 13:28:40.907027] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:30.608 [2024-07-25 13:28:40.907037] bdev_raid.c: 378:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x224ec20 name raid_bdev1, state offline 00:28:30.608 13:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.608 13:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:28:30.867 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:28:30.867 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:28:30.867 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:28:30.867 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:30.867 [2024-07-25 13:28:41.352069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:30.867 [2024-07-25 13:28:41.352105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.867 [2024-07-25 13:28:41.352119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x224f4d0 00:28:30.867 [2024-07-25 13:28:41.352131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:30.867 [2024-07-25 13:28:41.353452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.867 [2024-07-25 13:28:41.353476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:30.867 [2024-07-25 13:28:41.353518] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:30.868 [2024-07-25 13:28:41.353546] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:30.868 [2024-07-25 13:28:41.353618] 
bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:30.868 [2024-07-25 13:28:41.353630] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:30.868 [2024-07-25 13:28:41.353642] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2252510 name raid_bdev1, state configuring 00:28:30.868 [2024-07-25 13:28:41.353662] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:30.868 [2024-07-25 13:28:41.353707] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x224edc0 00:28:30.868 [2024-07-25 13:28:41.353717] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:30.868 [2024-07-25 13:28:41.353768] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2257460 00:28:30.868 [2024-07-25 13:28:41.353833] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x224edc0 00:28:30.868 [2024-07-25 13:28:41.353842] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x224edc0 00:28:30.868 [2024-07-25 13:28:41.353897] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:31.126 pt1 00:28:31.126 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:28:31.126 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:31.126 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:31.126 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:31.126 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:31.126 13:28:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:31.126 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:31.126 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:31.126 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:31.126 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:31.126 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:31.126 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.126 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.126 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:31.126 "name": "raid_bdev1", 00:28:31.126 "uuid": "27f32032-3448-46dd-94ad-64eb66138727", 00:28:31.126 "strip_size_kb": 0, 00:28:31.126 "state": "online", 00:28:31.126 "raid_level": "raid1", 00:28:31.126 "superblock": true, 00:28:31.126 "num_base_bdevs": 2, 00:28:31.126 "num_base_bdevs_discovered": 1, 00:28:31.126 "num_base_bdevs_operational": 1, 00:28:31.126 "base_bdevs_list": [ 00:28:31.126 { 00:28:31.126 "name": null, 00:28:31.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.126 "is_configured": false, 00:28:31.126 "data_offset": 256, 00:28:31.126 "data_size": 7936 00:28:31.126 }, 00:28:31.126 { 00:28:31.126 "name": "pt2", 00:28:31.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:31.126 "is_configured": true, 00:28:31.126 "data_offset": 256, 00:28:31.126 "data_size": 7936 00:28:31.126 
} 00:28:31.126 ] 00:28:31.126 }' 00:28:31.126 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:31.126 13:28:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:31.694 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:28:31.694 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:31.954 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:28:31.954 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:31.954 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:28:32.213 [2024-07-25 13:28:42.611596] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:32.213 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # '[' 27f32032-3448-46dd-94ad-64eb66138727 '!=' 27f32032-3448-46dd-94ad-64eb66138727 ']' 00:28:32.213 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@578 -- # killprocess 1017151 00:28:32.213 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 1017151 ']' 00:28:32.213 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 1017151 00:28:32.213 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:28:32.213 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:32.213 
13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1017151 00:28:32.213 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:32.213 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:32.213 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1017151' 00:28:32.213 killing process with pid 1017151 00:28:32.213 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 1017151 00:28:32.213 [2024-07-25 13:28:42.687376] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:32.213 [2024-07-25 13:28:42.687423] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:32.213 [2024-07-25 13:28:42.687460] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:32.213 [2024-07-25 13:28:42.687470] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x224edc0 name raid_bdev1, state offline 00:28:32.213 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 1017151 00:28:32.472 [2024-07-25 13:28:42.703133] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:32.472 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@580 -- # return 0 00:28:32.472 00:28:32.472 real 0m14.714s 00:28:32.472 user 0m26.677s 00:28:32.472 sys 0m2.756s 00:28:32.472 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:32.472 13:28:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:32.472 ************************************ 00:28:32.472 END TEST raid_superblock_test_md_interleaved 00:28:32.472 
************************************ 00:28:32.472 13:28:42 bdev_raid -- bdev/bdev_raid.sh@994 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:28:32.472 13:28:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:28:32.472 13:28:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:32.472 13:28:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:32.731 ************************************ 00:28:32.731 START TEST raid_rebuild_test_sb_md_interleaved 00:28:32.731 ************************************ 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # local verify=false 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # local strip_size 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # local create_arg 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@594 -- # local data_offset 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # raid_pid=1019848 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # waitforlisten 1019848 /var/tmp/spdk-raid.sock 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 1019848 ']' 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:32.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:32.731 13:28:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:32.731 [2024-07-25 13:28:43.040231] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:28:32.731 [2024-07-25 13:28:43.040286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019848 ] 00:28:32.731 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:32.731 Zero copy mechanism will not be used. 
00:28:32.731 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:32.731 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:32.731 [... the same "Reached maximum number of QAT devices" / "cannot be used" message pair repeats for devices 0000:3d:01.1 through 0000:3f:02.3 ...] 00:28:32.732 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:32.732 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:32.732 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:32.732 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:32.732 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:32.732 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:32.732 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:32.732 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:32.732 [2024-07-25 13:28:43.170852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.991 [2024-07-25 13:28:43.257462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.991 [2024-07-25 13:28:43.315840] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:32.991 [2024-07-25 13:28:43.315876] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:33.559 13:28:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:33.559 13:28:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:28:33.559 13:28:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:33.559 13:28:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:28:33.817 BaseBdev1_malloc 00:28:33.817 13:28:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:34.076 [2024-07-25 13:28:44.369044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 
00:28:34.076 [2024-07-25 13:28:44.369084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:34.076 [2024-07-25 13:28:44.369103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20f3650 00:28:34.076 [2024-07-25 13:28:44.369114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:34.076 [2024-07-25 13:28:44.370470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:34.076 [2024-07-25 13:28:44.370495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:34.076 BaseBdev1 00:28:34.076 13:28:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:34.076 13:28:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:28:34.334 BaseBdev2_malloc 00:28:34.334 13:28:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:34.593 [2024-07-25 13:28:44.826936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:34.593 [2024-07-25 13:28:44.826976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:34.593 [2024-07-25 13:28:44.826994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f4f830 00:28:34.593 [2024-07-25 13:28:44.827005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:34.593 [2024-07-25 13:28:44.828195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:34.593 [2024-07-25 13:28:44.828220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev2 00:28:34.593 BaseBdev2 00:28:34.593 13:28:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:28:34.593 spare_malloc 00:28:34.593 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:34.852 spare_delay 00:28:34.852 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:35.111 [2024-07-25 13:28:45.485069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:35.111 [2024-07-25 13:28:45.485109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:35.111 [2024-07-25 13:28:45.485127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f52540 00:28:35.111 [2024-07-25 13:28:45.485145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:35.111 [2024-07-25 13:28:45.486344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:35.111 [2024-07-25 13:28:45.486370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:35.111 spare 00:28:35.111 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:35.370 [2024-07-25 13:28:45.709681] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:35.370 [2024-07-25 13:28:45.710827] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:35.370 [2024-07-25 13:28:45.710968] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f52e70 00:28:35.370 [2024-07-25 13:28:45.710979] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:35.370 [2024-07-25 13:28:45.711044] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f56b60 00:28:35.370 [2024-07-25 13:28:45.711115] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f52e70 00:28:35.370 [2024-07-25 13:28:45.711124] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f52e70 00:28:35.370 [2024-07-25 13:28:45.711195] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:35.370 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:35.370 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:35.370 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:35.370 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:35.370 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:35.370 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:35.370 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:35.370 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:35.370 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:35.370 
13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:35.370 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.370 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.629 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:35.629 "name": "raid_bdev1", 00:28:35.629 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:35.629 "strip_size_kb": 0, 00:28:35.629 "state": "online", 00:28:35.629 "raid_level": "raid1", 00:28:35.629 "superblock": true, 00:28:35.629 "num_base_bdevs": 2, 00:28:35.629 "num_base_bdevs_discovered": 2, 00:28:35.629 "num_base_bdevs_operational": 2, 00:28:35.629 "base_bdevs_list": [ 00:28:35.629 { 00:28:35.629 "name": "BaseBdev1", 00:28:35.629 "uuid": "f683bd17-1d32-5184-9e12-46d55e99180e", 00:28:35.629 "is_configured": true, 00:28:35.629 "data_offset": 256, 00:28:35.629 "data_size": 7936 00:28:35.629 }, 00:28:35.629 { 00:28:35.629 "name": "BaseBdev2", 00:28:35.629 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:35.629 "is_configured": true, 00:28:35.629 "data_offset": 256, 00:28:35.629 "data_size": 7936 00:28:35.629 } 00:28:35.629 ] 00:28:35.629 }' 00:28:35.629 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:35.629 13:28:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:36.195 13:28:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:36.195 13:28:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 
00:28:36.195 [2024-07-25 13:28:46.660399] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:36.195 13:28:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:28:36.454 13:28:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.454 13:28:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:36.454 13:28:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:28:36.454 13:28:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:28:36.454 13:28:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # '[' false = true ']' 00:28:36.454 13:28:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:36.713 [2024-07-25 13:28:47.121375] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:36.713 13:28:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:36.713 13:28:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:36.713 13:28:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:36.713 13:28:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:36.713 13:28:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:36.713 13:28:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:36.713 13:28:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:36.713 13:28:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:36.713 13:28:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:36.713 13:28:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:36.713 13:28:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.713 13:28:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:36.972 13:28:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:36.972 "name": "raid_bdev1", 00:28:36.972 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:36.972 "strip_size_kb": 0, 00:28:36.972 "state": "online", 00:28:36.972 "raid_level": "raid1", 00:28:36.972 "superblock": true, 00:28:36.972 "num_base_bdevs": 2, 00:28:36.972 "num_base_bdevs_discovered": 1, 00:28:36.972 "num_base_bdevs_operational": 1, 00:28:36.972 "base_bdevs_list": [ 00:28:36.972 { 00:28:36.972 "name": null, 00:28:36.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.972 "is_configured": false, 00:28:36.972 "data_offset": 256, 00:28:36.972 "data_size": 7936 00:28:36.972 }, 00:28:36.972 { 00:28:36.972 "name": "BaseBdev2", 00:28:36.972 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:36.972 "is_configured": true, 00:28:36.972 "data_offset": 256, 00:28:36.972 "data_size": 7936 00:28:36.972 } 00:28:36.972 ] 00:28:36.972 }' 00:28:36.972 13:28:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:36.972 
13:28:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:37.580 13:28:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:37.839 [2024-07-25 13:28:48.087946] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:37.839 [2024-07-25 13:28:48.091403] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f566d0 00:28:37.840 [2024-07-25 13:28:48.093513] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:37.840 13:28:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:38.778 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:38.778 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:38.778 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:38.778 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:38.778 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:38.778 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.778 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.036 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:39.036 "name": "raid_bdev1", 00:28:39.036 "uuid": 
"75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:39.036 "strip_size_kb": 0, 00:28:39.036 "state": "online", 00:28:39.036 "raid_level": "raid1", 00:28:39.036 "superblock": true, 00:28:39.036 "num_base_bdevs": 2, 00:28:39.036 "num_base_bdevs_discovered": 2, 00:28:39.036 "num_base_bdevs_operational": 2, 00:28:39.036 "process": { 00:28:39.036 "type": "rebuild", 00:28:39.036 "target": "spare", 00:28:39.036 "progress": { 00:28:39.036 "blocks": 2816, 00:28:39.036 "percent": 35 00:28:39.036 } 00:28:39.036 }, 00:28:39.036 "base_bdevs_list": [ 00:28:39.036 { 00:28:39.036 "name": "spare", 00:28:39.036 "uuid": "5dd8b806-f2d2-51a9-a77e-bb099c1b1084", 00:28:39.036 "is_configured": true, 00:28:39.036 "data_offset": 256, 00:28:39.036 "data_size": 7936 00:28:39.036 }, 00:28:39.036 { 00:28:39.036 "name": "BaseBdev2", 00:28:39.036 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:39.036 "is_configured": true, 00:28:39.036 "data_offset": 256, 00:28:39.036 "data_size": 7936 00:28:39.036 } 00:28:39.036 ] 00:28:39.036 }' 00:28:39.036 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:39.036 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:39.036 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:39.036 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:39.036 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:39.295 [2024-07-25 13:28:49.586830] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:39.295 [2024-07-25 13:28:49.604566] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: 
No such device 00:28:39.295 [2024-07-25 13:28:49.604608] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:39.295 [2024-07-25 13:28:49.604622] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:39.295 [2024-07-25 13:28:49.604630] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:39.295 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:39.295 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:39.295 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:39.295 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:39.295 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:39.295 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:39.295 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:39.295 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:39.295 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:39.295 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:39.295 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.295 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:28:39.554 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:39.554 "name": "raid_bdev1", 00:28:39.554 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:39.554 "strip_size_kb": 0, 00:28:39.554 "state": "online", 00:28:39.554 "raid_level": "raid1", 00:28:39.554 "superblock": true, 00:28:39.554 "num_base_bdevs": 2, 00:28:39.554 "num_base_bdevs_discovered": 1, 00:28:39.554 "num_base_bdevs_operational": 1, 00:28:39.554 "base_bdevs_list": [ 00:28:39.554 { 00:28:39.554 "name": null, 00:28:39.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.554 "is_configured": false, 00:28:39.554 "data_offset": 256, 00:28:39.554 "data_size": 7936 00:28:39.554 }, 00:28:39.554 { 00:28:39.554 "name": "BaseBdev2", 00:28:39.554 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:39.554 "is_configured": true, 00:28:39.554 "data_offset": 256, 00:28:39.554 "data_size": 7936 00:28:39.554 } 00:28:39.554 ] 00:28:39.554 }' 00:28:39.554 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:39.554 13:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:40.121 13:28:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:40.121 13:28:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:40.121 13:28:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:40.121 13:28:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:40.121 13:28:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:40.121 13:28:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.121 13:28:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.380 13:28:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:40.380 "name": "raid_bdev1", 00:28:40.380 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:40.380 "strip_size_kb": 0, 00:28:40.380 "state": "online", 00:28:40.380 "raid_level": "raid1", 00:28:40.380 "superblock": true, 00:28:40.380 "num_base_bdevs": 2, 00:28:40.380 "num_base_bdevs_discovered": 1, 00:28:40.380 "num_base_bdevs_operational": 1, 00:28:40.380 "base_bdevs_list": [ 00:28:40.380 { 00:28:40.380 "name": null, 00:28:40.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:40.380 "is_configured": false, 00:28:40.380 "data_offset": 256, 00:28:40.380 "data_size": 7936 00:28:40.380 }, 00:28:40.380 { 00:28:40.380 "name": "BaseBdev2", 00:28:40.380 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:40.380 "is_configured": true, 00:28:40.380 "data_offset": 256, 00:28:40.380 "data_size": 7936 00:28:40.380 } 00:28:40.380 ] 00:28:40.380 }' 00:28:40.380 13:28:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:40.380 13:28:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:40.380 13:28:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:40.380 13:28:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:40.380 13:28:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:40.639 [2024-07-25 13:28:50.959739] bdev_raid.c:3312:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:28:40.639 [2024-07-25 13:28:50.963203] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f54ab0 00:28:40.639 [2024-07-25 13:28:50.964563] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:40.639 13:28:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@678 -- # sleep 1 00:28:41.573 13:28:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:41.573 13:28:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:41.573 13:28:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:41.573 13:28:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:41.573 13:28:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:41.573 13:28:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.573 13:28:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.832 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:41.832 "name": "raid_bdev1", 00:28:41.832 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:41.832 "strip_size_kb": 0, 00:28:41.832 "state": "online", 00:28:41.832 "raid_level": "raid1", 00:28:41.832 "superblock": true, 00:28:41.832 "num_base_bdevs": 2, 00:28:41.832 "num_base_bdevs_discovered": 2, 00:28:41.832 "num_base_bdevs_operational": 2, 00:28:41.832 "process": { 00:28:41.832 "type": "rebuild", 00:28:41.832 "target": "spare", 00:28:41.832 "progress": { 00:28:41.832 
"blocks": 3072, 00:28:41.832 "percent": 38 00:28:41.832 } 00:28:41.832 }, 00:28:41.832 "base_bdevs_list": [ 00:28:41.832 { 00:28:41.832 "name": "spare", 00:28:41.832 "uuid": "5dd8b806-f2d2-51a9-a77e-bb099c1b1084", 00:28:41.832 "is_configured": true, 00:28:41.832 "data_offset": 256, 00:28:41.832 "data_size": 7936 00:28:41.832 }, 00:28:41.832 { 00:28:41.832 "name": "BaseBdev2", 00:28:41.832 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:41.832 "is_configured": true, 00:28:41.832 "data_offset": 256, 00:28:41.832 "data_size": 7936 00:28:41.832 } 00:28:41.832 ] 00:28:41.832 }' 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:28:41.833 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # local timeout=1087 00:28:41.833 13:28:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.833 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.091 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:42.091 "name": "raid_bdev1", 00:28:42.091 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:42.091 "strip_size_kb": 0, 00:28:42.091 "state": "online", 00:28:42.091 "raid_level": "raid1", 00:28:42.091 "superblock": true, 00:28:42.091 "num_base_bdevs": 2, 00:28:42.091 "num_base_bdevs_discovered": 2, 00:28:42.091 "num_base_bdevs_operational": 2, 00:28:42.091 "process": { 00:28:42.091 "type": "rebuild", 00:28:42.091 "target": "spare", 00:28:42.091 "progress": { 00:28:42.091 "blocks": 3840, 00:28:42.091 "percent": 48 00:28:42.091 } 00:28:42.091 }, 00:28:42.091 "base_bdevs_list": [ 00:28:42.091 { 00:28:42.091 "name": "spare", 00:28:42.091 "uuid": "5dd8b806-f2d2-51a9-a77e-bb099c1b1084", 00:28:42.091 "is_configured": true, 00:28:42.091 "data_offset": 256, 00:28:42.091 
"data_size": 7936 00:28:42.091 }, 00:28:42.091 { 00:28:42.091 "name": "BaseBdev2", 00:28:42.091 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:42.091 "is_configured": true, 00:28:42.091 "data_offset": 256, 00:28:42.091 "data_size": 7936 00:28:42.091 } 00:28:42.091 ] 00:28:42.091 }' 00:28:42.091 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:42.091 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:42.091 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:42.350 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:42.350 13:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:43.287 13:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:43.287 13:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:43.287 13:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:43.287 13:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:43.287 13:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:43.287 13:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:43.287 13:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.287 13:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:28:43.546 13:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:43.546 "name": "raid_bdev1", 00:28:43.546 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:43.546 "strip_size_kb": 0, 00:28:43.546 "state": "online", 00:28:43.546 "raid_level": "raid1", 00:28:43.546 "superblock": true, 00:28:43.546 "num_base_bdevs": 2, 00:28:43.546 "num_base_bdevs_discovered": 2, 00:28:43.546 "num_base_bdevs_operational": 2, 00:28:43.546 "process": { 00:28:43.546 "type": "rebuild", 00:28:43.546 "target": "spare", 00:28:43.546 "progress": { 00:28:43.546 "blocks": 7168, 00:28:43.546 "percent": 90 00:28:43.546 } 00:28:43.546 }, 00:28:43.546 "base_bdevs_list": [ 00:28:43.546 { 00:28:43.546 "name": "spare", 00:28:43.546 "uuid": "5dd8b806-f2d2-51a9-a77e-bb099c1b1084", 00:28:43.546 "is_configured": true, 00:28:43.546 "data_offset": 256, 00:28:43.546 "data_size": 7936 00:28:43.546 }, 00:28:43.546 { 00:28:43.546 "name": "BaseBdev2", 00:28:43.546 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:43.546 "is_configured": true, 00:28:43.546 "data_offset": 256, 00:28:43.546 "data_size": 7936 00:28:43.546 } 00:28:43.546 ] 00:28:43.546 }' 00:28:43.546 13:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:43.546 13:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:43.546 13:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:43.546 13:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:43.546 13:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:43.804 [2024-07-25 13:28:54.087010] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:43.804 [2024-07-25 
13:28:54.087062] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:43.804 [2024-07-25 13:28:54.087135] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:44.741 13:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:44.741 13:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:44.741 13:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:44.741 13:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:44.742 13:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:44.742 13:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:44.742 13:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:44.742 13:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.742 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:44.742 "name": "raid_bdev1", 00:28:44.742 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:44.742 "strip_size_kb": 0, 00:28:44.742 "state": "online", 00:28:44.742 "raid_level": "raid1", 00:28:44.742 "superblock": true, 00:28:44.742 "num_base_bdevs": 2, 00:28:44.742 "num_base_bdevs_discovered": 2, 00:28:44.742 "num_base_bdevs_operational": 2, 00:28:44.742 "base_bdevs_list": [ 00:28:44.742 { 00:28:44.742 "name": "spare", 00:28:44.742 "uuid": "5dd8b806-f2d2-51a9-a77e-bb099c1b1084", 00:28:44.742 "is_configured": true, 
00:28:44.742 "data_offset": 256, 00:28:44.742 "data_size": 7936 00:28:44.742 }, 00:28:44.742 { 00:28:44.742 "name": "BaseBdev2", 00:28:44.742 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:44.742 "is_configured": true, 00:28:44.742 "data_offset": 256, 00:28:44.742 "data_size": 7936 00:28:44.742 } 00:28:44.742 ] 00:28:44.742 }' 00:28:44.742 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:44.742 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:44.742 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:45.000 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:45.000 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@724 -- # break 00:28:45.000 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:45.000 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:45.000 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:45.000 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:45.000 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:45.000 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.000 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.000 13:28:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:45.000 "name": "raid_bdev1", 00:28:45.000 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:45.000 "strip_size_kb": 0, 00:28:45.000 "state": "online", 00:28:45.000 "raid_level": "raid1", 00:28:45.000 "superblock": true, 00:28:45.000 "num_base_bdevs": 2, 00:28:45.000 "num_base_bdevs_discovered": 2, 00:28:45.000 "num_base_bdevs_operational": 2, 00:28:45.000 "base_bdevs_list": [ 00:28:45.000 { 00:28:45.000 "name": "spare", 00:28:45.000 "uuid": "5dd8b806-f2d2-51a9-a77e-bb099c1b1084", 00:28:45.000 "is_configured": true, 00:28:45.000 "data_offset": 256, 00:28:45.001 "data_size": 7936 00:28:45.001 }, 00:28:45.001 { 00:28:45.001 "name": "BaseBdev2", 00:28:45.001 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:45.001 "is_configured": true, 00:28:45.001 "data_offset": 256, 00:28:45.001 "data_size": 7936 00:28:45.001 } 00:28:45.001 ] 00:28:45.001 }' 00:28:45.001 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:45.260 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:45.260 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:45.260 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:45.260 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:45.260 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:45.260 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:45.260 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:28:45.260 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:45.260 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:45.260 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:45.260 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:45.260 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:45.260 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:45.260 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.260 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.519 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:45.519 "name": "raid_bdev1", 00:28:45.519 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:45.519 "strip_size_kb": 0, 00:28:45.519 "state": "online", 00:28:45.519 "raid_level": "raid1", 00:28:45.519 "superblock": true, 00:28:45.519 "num_base_bdevs": 2, 00:28:45.519 "num_base_bdevs_discovered": 2, 00:28:45.519 "num_base_bdevs_operational": 2, 00:28:45.519 "base_bdevs_list": [ 00:28:45.519 { 00:28:45.519 "name": "spare", 00:28:45.519 "uuid": "5dd8b806-f2d2-51a9-a77e-bb099c1b1084", 00:28:45.519 "is_configured": true, 00:28:45.519 "data_offset": 256, 00:28:45.519 "data_size": 7936 00:28:45.519 }, 00:28:45.519 { 00:28:45.519 "name": "BaseBdev2", 00:28:45.519 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:45.519 "is_configured": true, 00:28:45.519 "data_offset": 256, 00:28:45.519 
"data_size": 7936 00:28:45.519 } 00:28:45.519 ] 00:28:45.519 }' 00:28:45.519 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:45.519 13:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:46.086 13:28:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:46.086 [2024-07-25 13:28:56.541370] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:46.086 [2024-07-25 13:28:56.541395] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:46.086 [2024-07-25 13:28:56.541444] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:46.086 [2024-07-25 13:28:56.541496] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:46.086 [2024-07-25 13:28:56.541507] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f52e70 name raid_bdev1, state offline 00:28:46.086 13:28:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.086 13:28:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # jq length 00:28:46.345 13:28:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:28:46.345 13:28:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@737 -- # '[' false = true ']' 00:28:46.345 13:28:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:28:46.345 13:28:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:46.604 13:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:46.861 [2024-07-25 13:28:57.223125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:46.861 [2024-07-25 13:28:57.223169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:46.861 [2024-07-25 13:28:57.223187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f564d0 00:28:46.861 [2024-07-25 13:28:57.223198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:46.861 [2024-07-25 13:28:57.224801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:46.861 [2024-07-25 13:28:57.224829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:46.861 [2024-07-25 13:28:57.224880] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:46.861 [2024-07-25 13:28:57.224905] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:46.861 [2024-07-25 13:28:57.224981] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:46.861 spare 00:28:46.861 13:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:46.861 13:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:46.861 13:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:46.861 13:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:28:46.861 13:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:46.861 13:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:46.861 13:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:46.862 13:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:46.862 13:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:46.862 13:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:46.862 13:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.862 13:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:46.862 [2024-07-25 13:28:57.325281] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f52d90 00:28:46.862 [2024-07-25 13:28:57.325295] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:46.862 [2024-07-25 13:28:57.325357] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f4eee0 00:28:46.862 [2024-07-25 13:28:57.325437] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f52d90 00:28:46.862 [2024-07-25 13:28:57.325445] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f52d90 00:28:46.862 [2024-07-25 13:28:57.325508] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:47.119 13:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:47.119 "name": "raid_bdev1", 00:28:47.119 
"uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:47.119 "strip_size_kb": 0, 00:28:47.119 "state": "online", 00:28:47.119 "raid_level": "raid1", 00:28:47.119 "superblock": true, 00:28:47.119 "num_base_bdevs": 2, 00:28:47.119 "num_base_bdevs_discovered": 2, 00:28:47.119 "num_base_bdevs_operational": 2, 00:28:47.119 "base_bdevs_list": [ 00:28:47.119 { 00:28:47.119 "name": "spare", 00:28:47.119 "uuid": "5dd8b806-f2d2-51a9-a77e-bb099c1b1084", 00:28:47.119 "is_configured": true, 00:28:47.119 "data_offset": 256, 00:28:47.119 "data_size": 7936 00:28:47.119 }, 00:28:47.119 { 00:28:47.119 "name": "BaseBdev2", 00:28:47.119 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:47.119 "is_configured": true, 00:28:47.119 "data_offset": 256, 00:28:47.119 "data_size": 7936 00:28:47.119 } 00:28:47.120 ] 00:28:47.120 }' 00:28:47.120 13:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:47.120 13:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:47.686 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:47.686 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:47.686 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:47.686 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:47.686 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:47.686 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.686 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:28:47.946 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:47.946 "name": "raid_bdev1", 00:28:47.946 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:47.946 "strip_size_kb": 0, 00:28:47.946 "state": "online", 00:28:47.946 "raid_level": "raid1", 00:28:47.946 "superblock": true, 00:28:47.946 "num_base_bdevs": 2, 00:28:47.946 "num_base_bdevs_discovered": 2, 00:28:47.946 "num_base_bdevs_operational": 2, 00:28:47.946 "base_bdevs_list": [ 00:28:47.946 { 00:28:47.946 "name": "spare", 00:28:47.946 "uuid": "5dd8b806-f2d2-51a9-a77e-bb099c1b1084", 00:28:47.946 "is_configured": true, 00:28:47.946 "data_offset": 256, 00:28:47.946 "data_size": 7936 00:28:47.946 }, 00:28:47.946 { 00:28:47.946 "name": "BaseBdev2", 00:28:47.946 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:47.946 "is_configured": true, 00:28:47.946 "data_offset": 256, 00:28:47.946 "data_size": 7936 00:28:47.946 } 00:28:47.946 ] 00:28:47.946 }' 00:28:47.946 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:47.946 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:47.946 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:47.946 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:47.946 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.946 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:48.205 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 
00:28:48.205 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:48.464 [2024-07-25 13:28:58.827458] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:48.464 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:48.464 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:48.464 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:48.464 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:48.464 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:48.464 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:48.464 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:48.464 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:48.464 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:48.464 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:48.464 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.464 13:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.723 13:28:59 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:48.723 "name": "raid_bdev1", 00:28:48.723 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:48.723 "strip_size_kb": 0, 00:28:48.723 "state": "online", 00:28:48.723 "raid_level": "raid1", 00:28:48.723 "superblock": true, 00:28:48.723 "num_base_bdevs": 2, 00:28:48.723 "num_base_bdevs_discovered": 1, 00:28:48.723 "num_base_bdevs_operational": 1, 00:28:48.723 "base_bdevs_list": [ 00:28:48.723 { 00:28:48.723 "name": null, 00:28:48.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.723 "is_configured": false, 00:28:48.723 "data_offset": 256, 00:28:48.723 "data_size": 7936 00:28:48.723 }, 00:28:48.723 { 00:28:48.723 "name": "BaseBdev2", 00:28:48.723 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:48.723 "is_configured": true, 00:28:48.723 "data_offset": 256, 00:28:48.723 "data_size": 7936 00:28:48.723 } 00:28:48.723 ] 00:28:48.723 }' 00:28:48.723 13:28:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:48.723 13:28:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:49.290 13:28:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:49.549 [2024-07-25 13:28:59.781998] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:49.549 [2024-07-25 13:28:59.782126] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:49.549 [2024-07-25 13:28:59.782147] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:49.549 [2024-07-25 13:28:59.782173] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:49.549 [2024-07-25 13:28:59.785474] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f55900 00:28:49.549 [2024-07-25 13:28:59.787582] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:49.549 13:28:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # sleep 1 00:28:50.485 13:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:50.485 13:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:50.485 13:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:50.485 13:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:50.485 13:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:50.486 13:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.486 13:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.744 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:50.744 "name": "raid_bdev1", 00:28:50.744 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:50.744 "strip_size_kb": 0, 00:28:50.744 "state": "online", 00:28:50.744 "raid_level": "raid1", 00:28:50.744 "superblock": true, 00:28:50.744 "num_base_bdevs": 2, 00:28:50.744 "num_base_bdevs_discovered": 2, 00:28:50.744 "num_base_bdevs_operational": 2, 00:28:50.744 "process": { 00:28:50.744 "type": 
"rebuild", 00:28:50.744 "target": "spare", 00:28:50.744 "progress": { 00:28:50.744 "blocks": 3072, 00:28:50.744 "percent": 38 00:28:50.744 } 00:28:50.744 }, 00:28:50.744 "base_bdevs_list": [ 00:28:50.744 { 00:28:50.744 "name": "spare", 00:28:50.744 "uuid": "5dd8b806-f2d2-51a9-a77e-bb099c1b1084", 00:28:50.744 "is_configured": true, 00:28:50.744 "data_offset": 256, 00:28:50.744 "data_size": 7936 00:28:50.744 }, 00:28:50.744 { 00:28:50.744 "name": "BaseBdev2", 00:28:50.744 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:50.744 "is_configured": true, 00:28:50.744 "data_offset": 256, 00:28:50.744 "data_size": 7936 00:28:50.744 } 00:28:50.744 ] 00:28:50.744 }' 00:28:50.744 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:50.744 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:50.744 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:50.744 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:50.744 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:51.003 [2024-07-25 13:29:01.340584] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:51.003 [2024-07-25 13:29:01.399260] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:51.003 [2024-07-25 13:29:01.399302] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:51.003 [2024-07-25 13:29:01.399316] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:51.003 [2024-07-25 13:29:01.399323] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to 
remove target bdev: No such device 00:28:51.003 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:51.003 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:51.003 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:51.003 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:51.003 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:51.003 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:51.003 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:51.003 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:51.003 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:51.003 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:51.003 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:51.003 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.298 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:51.298 "name": "raid_bdev1", 00:28:51.298 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:51.298 "strip_size_kb": 0, 00:28:51.298 "state": "online", 00:28:51.298 "raid_level": "raid1", 00:28:51.298 "superblock": true, 00:28:51.298 
"num_base_bdevs": 2, 00:28:51.298 "num_base_bdevs_discovered": 1, 00:28:51.298 "num_base_bdevs_operational": 1, 00:28:51.298 "base_bdevs_list": [ 00:28:51.298 { 00:28:51.298 "name": null, 00:28:51.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:51.298 "is_configured": false, 00:28:51.298 "data_offset": 256, 00:28:51.298 "data_size": 7936 00:28:51.298 }, 00:28:51.298 { 00:28:51.298 "name": "BaseBdev2", 00:28:51.298 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:51.298 "is_configured": true, 00:28:51.298 "data_offset": 256, 00:28:51.298 "data_size": 7936 00:28:51.298 } 00:28:51.298 ] 00:28:51.298 }' 00:28:51.298 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:51.298 13:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:51.866 13:29:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:52.125 [2024-07-25 13:29:02.445669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:52.125 [2024-07-25 13:29:02.445711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:52.125 [2024-07-25 13:29:02.445736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f56770 00:28:52.125 [2024-07-25 13:29:02.445748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:52.125 [2024-07-25 13:29:02.445913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:52.125 [2024-07-25 13:29:02.445928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:52.125 [2024-07-25 13:29:02.445976] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:52.125 [2024-07-25 13:29:02.445986] 
bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:52.125 [2024-07-25 13:29:02.445996] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:52.125 [2024-07-25 13:29:02.446012] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:52.125 [2024-07-25 13:29:02.449333] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f4eee0 00:28:52.125 [2024-07-25 13:29:02.450712] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:52.125 spare 00:28:52.125 13:29:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # sleep 1 00:28:53.060 13:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:53.060 13:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:53.060 13:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:53.060 13:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:53.060 13:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:53.060 13:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.060 13:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.317 13:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:53.317 "name": "raid_bdev1", 00:28:53.317 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 
00:28:53.317 "strip_size_kb": 0, 00:28:53.317 "state": "online", 00:28:53.317 "raid_level": "raid1", 00:28:53.317 "superblock": true, 00:28:53.317 "num_base_bdevs": 2, 00:28:53.317 "num_base_bdevs_discovered": 2, 00:28:53.317 "num_base_bdevs_operational": 2, 00:28:53.317 "process": { 00:28:53.317 "type": "rebuild", 00:28:53.317 "target": "spare", 00:28:53.317 "progress": { 00:28:53.317 "blocks": 3072, 00:28:53.317 "percent": 38 00:28:53.317 } 00:28:53.317 }, 00:28:53.317 "base_bdevs_list": [ 00:28:53.317 { 00:28:53.317 "name": "spare", 00:28:53.317 "uuid": "5dd8b806-f2d2-51a9-a77e-bb099c1b1084", 00:28:53.317 "is_configured": true, 00:28:53.317 "data_offset": 256, 00:28:53.317 "data_size": 7936 00:28:53.317 }, 00:28:53.317 { 00:28:53.317 "name": "BaseBdev2", 00:28:53.317 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:53.317 "is_configured": true, 00:28:53.317 "data_offset": 256, 00:28:53.317 "data_size": 7936 00:28:53.317 } 00:28:53.317 ] 00:28:53.317 }' 00:28:53.317 13:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:53.317 13:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:53.318 13:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:53.318 13:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:53.318 13:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:53.576 [2024-07-25 13:29:04.003668] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:53.576 [2024-07-25 13:29:04.062378] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:53.576 [2024-07-25 
13:29:04.062416] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:53.576 [2024-07-25 13:29:04.062430] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:53.576 [2024-07-25 13:29:04.062443] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:53.835 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:53.835 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:53.835 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:53.835 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:53.835 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:53.835 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:53.835 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:53.835 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:53.835 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:53.835 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:53.835 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.835 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.835 13:29:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:53.835 "name": "raid_bdev1", 00:28:53.835 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:53.835 "strip_size_kb": 0, 00:28:53.835 "state": "online", 00:28:53.835 "raid_level": "raid1", 00:28:53.835 "superblock": true, 00:28:53.835 "num_base_bdevs": 2, 00:28:53.835 "num_base_bdevs_discovered": 1, 00:28:53.835 "num_base_bdevs_operational": 1, 00:28:53.835 "base_bdevs_list": [ 00:28:53.835 { 00:28:53.835 "name": null, 00:28:53.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:53.835 "is_configured": false, 00:28:53.835 "data_offset": 256, 00:28:53.835 "data_size": 7936 00:28:53.835 }, 00:28:53.835 { 00:28:53.835 "name": "BaseBdev2", 00:28:53.835 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:53.835 "is_configured": true, 00:28:53.835 "data_offset": 256, 00:28:53.835 "data_size": 7936 00:28:53.835 } 00:28:53.835 ] 00:28:53.835 }' 00:28:53.835 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:53.835 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:54.772 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:54.772 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:54.772 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:54.772 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:54.772 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:54.772 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.772 13:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.772 13:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:54.772 "name": "raid_bdev1", 00:28:54.772 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:54.772 "strip_size_kb": 0, 00:28:54.772 "state": "online", 00:28:54.772 "raid_level": "raid1", 00:28:54.772 "superblock": true, 00:28:54.772 "num_base_bdevs": 2, 00:28:54.772 "num_base_bdevs_discovered": 1, 00:28:54.772 "num_base_bdevs_operational": 1, 00:28:54.772 "base_bdevs_list": [ 00:28:54.772 { 00:28:54.772 "name": null, 00:28:54.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:54.772 "is_configured": false, 00:28:54.772 "data_offset": 256, 00:28:54.772 "data_size": 7936 00:28:54.772 }, 00:28:54.772 { 00:28:54.772 "name": "BaseBdev2", 00:28:54.772 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:54.772 "is_configured": true, 00:28:54.772 "data_offset": 256, 00:28:54.772 "data_size": 7936 00:28:54.772 } 00:28:54.772 ] 00:28:54.772 }' 00:28:54.772 13:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:54.772 13:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:54.772 13:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:54.772 13:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:54.772 13:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:55.031 13:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@788 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:55.290 [2024-07-25 13:29:05.650185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:55.290 [2024-07-25 13:29:05.650229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:55.290 [2024-07-25 13:29:05.650249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f4eac0 00:28:55.290 [2024-07-25 13:29:05.650260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:55.290 [2024-07-25 13:29:05.650406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:55.290 [2024-07-25 13:29:05.650422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:55.290 [2024-07-25 13:29:05.650462] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:55.290 [2024-07-25 13:29:05.650472] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:55.290 [2024-07-25 13:29:05.650482] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:55.290 BaseBdev1 00:28:55.290 13:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@789 -- # sleep 1 00:28:56.226 13:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:56.226 13:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:56.226 13:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:56.226 13:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:28:56.226 13:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:56.226 13:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:56.226 13:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:56.226 13:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:56.226 13:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:56.226 13:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:56.226 13:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.226 13:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.485 13:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:56.485 "name": "raid_bdev1", 00:28:56.485 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:56.485 "strip_size_kb": 0, 00:28:56.485 "state": "online", 00:28:56.485 "raid_level": "raid1", 00:28:56.485 "superblock": true, 00:28:56.485 "num_base_bdevs": 2, 00:28:56.485 "num_base_bdevs_discovered": 1, 00:28:56.485 "num_base_bdevs_operational": 1, 00:28:56.485 "base_bdevs_list": [ 00:28:56.485 { 00:28:56.485 "name": null, 00:28:56.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:56.485 "is_configured": false, 00:28:56.485 "data_offset": 256, 00:28:56.485 "data_size": 7936 00:28:56.485 }, 00:28:56.485 { 00:28:56.485 "name": "BaseBdev2", 00:28:56.485 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:56.485 "is_configured": true, 00:28:56.485 "data_offset": 256, 00:28:56.485 
"data_size": 7936 00:28:56.485 } 00:28:56.485 ] 00:28:56.485 }' 00:28:56.485 13:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:56.485 13:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:57.134 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:57.134 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:57.134 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:57.134 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:57.134 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:57.134 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:57.134 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:57.393 "name": "raid_bdev1", 00:28:57.393 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:57.393 "strip_size_kb": 0, 00:28:57.393 "state": "online", 00:28:57.393 "raid_level": "raid1", 00:28:57.393 "superblock": true, 00:28:57.393 "num_base_bdevs": 2, 00:28:57.393 "num_base_bdevs_discovered": 1, 00:28:57.393 "num_base_bdevs_operational": 1, 00:28:57.393 "base_bdevs_list": [ 00:28:57.393 { 00:28:57.393 "name": null, 00:28:57.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:57.393 "is_configured": false, 00:28:57.393 "data_offset": 256, 00:28:57.393 "data_size": 7936 00:28:57.393 }, 
00:28:57.393 { 00:28:57.393 "name": "BaseBdev2", 00:28:57.393 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:57.393 "is_configured": true, 00:28:57.393 "data_offset": 256, 00:28:57.393 "data_size": 7936 00:28:57.393 } 00:28:57.393 ] 00:28:57.393 }' 00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:28:57.393 13:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:57.652 [2024-07-25 13:29:08.012404] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:57.652 [2024-07-25 13:29:08.012512] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:57.652 [2024-07-25 13:29:08.012527] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:57.652 request: 00:28:57.652 { 00:28:57.652 "base_bdev": "BaseBdev1", 00:28:57.652 "raid_bdev": "raid_bdev1", 00:28:57.652 "method": "bdev_raid_add_base_bdev", 00:28:57.652 "req_id": 1 00:28:57.652 } 00:28:57.652 Got JSON-RPC error response 00:28:57.652 response: 00:28:57.652 { 00:28:57.652 "code": -22, 00:28:57.652 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:57.652 } 00:28:57.652 13:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:28:57.652 13:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 
00:28:57.652 13:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:57.652 13:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:57.652 13:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@793 -- # sleep 1 00:28:58.590 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:58.590 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:58.590 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:58.590 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:58.590 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:58.590 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:58.590 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:58.590 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:58.590 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:58.590 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:58.590 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:58.590 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:58.849 13:29:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:58.849 "name": "raid_bdev1", 00:28:58.849 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:58.849 "strip_size_kb": 0, 00:28:58.849 "state": "online", 00:28:58.849 "raid_level": "raid1", 00:28:58.849 "superblock": true, 00:28:58.849 "num_base_bdevs": 2, 00:28:58.849 "num_base_bdevs_discovered": 1, 00:28:58.849 "num_base_bdevs_operational": 1, 00:28:58.849 "base_bdevs_list": [ 00:28:58.849 { 00:28:58.849 "name": null, 00:28:58.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:58.849 "is_configured": false, 00:28:58.849 "data_offset": 256, 00:28:58.849 "data_size": 7936 00:28:58.849 }, 00:28:58.849 { 00:28:58.849 "name": "BaseBdev2", 00:28:58.849 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:58.849 "is_configured": true, 00:28:58.849 "data_offset": 256, 00:28:58.849 "data_size": 7936 00:28:58.849 } 00:28:58.849 ] 00:28:58.849 }' 00:28:58.849 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:58.849 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:59.418 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:59.418 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:59.418 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:59.418 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:59.418 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:59.418 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.418 13:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.677 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:59.677 "name": "raid_bdev1", 00:28:59.677 "uuid": "75ccb502-ef93-4b00-ade8-b9dcc2e29865", 00:28:59.677 "strip_size_kb": 0, 00:28:59.677 "state": "online", 00:28:59.677 "raid_level": "raid1", 00:28:59.677 "superblock": true, 00:28:59.677 "num_base_bdevs": 2, 00:28:59.677 "num_base_bdevs_discovered": 1, 00:28:59.677 "num_base_bdevs_operational": 1, 00:28:59.677 "base_bdevs_list": [ 00:28:59.677 { 00:28:59.677 "name": null, 00:28:59.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.677 "is_configured": false, 00:28:59.677 "data_offset": 256, 00:28:59.677 "data_size": 7936 00:28:59.677 }, 00:28:59.677 { 00:28:59.677 "name": "BaseBdev2", 00:28:59.677 "uuid": "d7a9b121-3106-5271-9902-28720334278f", 00:28:59.677 "is_configured": true, 00:28:59.677 "data_offset": 256, 00:28:59.677 "data_size": 7936 00:28:59.677 } 00:28:59.677 ] 00:28:59.677 }' 00:28:59.677 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:59.677 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:59.677 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:59.937 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:59.937 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@798 -- # killprocess 1019848 00:28:59.937 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 1019848 ']' 00:28:59.937 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@954 -- # kill -0 1019848 00:28:59.937 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:28:59.937 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:59.937 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1019848 00:28:59.937 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:59.937 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:59.937 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1019848' 00:28:59.937 killing process with pid 1019848 00:28:59.937 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 1019848 00:28:59.937 Received shutdown signal, test time was about 60.000000 seconds 00:28:59.937 00:28:59.937 Latency(us) 00:28:59.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.937 =================================================================================================================== 00:28:59.937 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:59.937 [2024-07-25 13:29:10.226785] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:59.937 [2024-07-25 13:29:10.226861] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:59.937 [2024-07-25 13:29:10.226897] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:59.937 [2024-07-25 13:29:10.226915] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f52d90 name raid_bdev1, state offline 00:28:59.937 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- common/autotest_common.sh@974 -- # wait 1019848 00:28:59.937 [2024-07-25 13:29:10.251399] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:00.196 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@800 -- # return 0 00:29:00.196 00:29:00.196 real 0m27.466s 00:29:00.196 user 0m43.338s 00:29:00.196 sys 0m3.665s 00:29:00.196 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:00.196 13:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:00.196 ************************************ 00:29:00.196 END TEST raid_rebuild_test_sb_md_interleaved 00:29:00.197 ************************************ 00:29:00.197 13:29:10 bdev_raid -- bdev/bdev_raid.sh@996 -- # trap - EXIT 00:29:00.197 13:29:10 bdev_raid -- bdev/bdev_raid.sh@997 -- # cleanup 00:29:00.197 13:29:10 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 1019848 ']' 00:29:00.197 13:29:10 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 1019848 00:29:00.197 13:29:10 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:29:00.197 00:29:00.197 real 17m55.336s 00:29:00.197 user 30m16.725s 00:29:00.197 sys 3m15.805s 00:29:00.197 13:29:10 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:00.197 13:29:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:00.197 ************************************ 00:29:00.197 END TEST bdev_raid 00:29:00.197 ************************************ 00:29:00.197 13:29:10 -- spdk/autotest.sh@195 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:29:00.197 13:29:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:00.197 13:29:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:00.197 13:29:10 -- common/autotest_common.sh@10 -- # set +x 00:29:00.197 ************************************ 00:29:00.197 START TEST bdevperf_config 00:29:00.197 
************************************ 00:29:00.197 13:29:10 bdevperf_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:29:00.456 * Looking for test storage... 00:29:00.456 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:00.456 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:29:00.456 13:29:10 bdevperf_config -- 
bdevperf/common.sh@8 -- # local job_section=job0 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:00.456 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:29:00.456 13:29:10 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:00.457 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:00.457 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:29:00.457 
13:29:10 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:00.457 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:00.457 13:29:10 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:02.993 13:29:13 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-25 13:29:10.824550] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:29:02.993 [2024-07-25 13:29:10.824609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024822 ] 00:29:02.993 Using job config with 4 jobs 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3f:02.1 
cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.993 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:02.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.994 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:02.994 [2024-07-25 13:29:10.968830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.994 [2024-07-25 13:29:11.073074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.994 cpumask for '\''job0'\'' is too big 00:29:02.994 cpumask for '\''job1'\'' is too big 00:29:02.994 cpumask for '\''job2'\'' is too big 00:29:02.994 cpumask for '\''job3'\'' is too big 00:29:02.994 Running I/O for 2 seconds... 
00:29:02.994 00:29:02.994 Latency(us) 00:29:02.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.994 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:02.994 Malloc0 : 2.02 26004.93 25.40 0.00 0.00 9829.63 1703.94 14994.64 00:29:02.994 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:02.994 Malloc0 : 2.02 25982.65 25.37 0.00 0.00 9817.47 1677.72 13316.92 00:29:02.994 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:02.994 Malloc0 : 2.02 25960.65 25.35 0.00 0.00 9805.62 1677.72 11586.76 00:29:02.994 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:02.994 Malloc0 : 2.02 25938.63 25.33 0.00 0.00 9793.94 1677.72 10171.19 00:29:02.994 =================================================================================================================== 00:29:02.994 Total : 103886.87 101.45 0.00 0.00 9811.67 1677.72 14994.64' 00:29:02.994 13:29:13 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-25 13:29:10.824550] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:29:02.994 [... remainder of the same bdevperf output captured above, repeated verbatim: DPDK EAL parameters, qat_pci_device_allocate()/EAL "Requested device ... cannot be used" messages for 0000:3d:01.0-02.7 and 0000:3f:01.0-02.7, cpumask warnings, and the identical 4-job latency table ...]' 13:29:13 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 13:29:10.824550] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:29:02.994 [... the same bdevperf output echoed a third time, verbatim: DPDK EAL parameters, qat_pci_device_allocate()/EAL "Requested device ... cannot be used" messages, cpumask warnings, and the identical 4-job latency table ...]' 13:29:13 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 13:29:13 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 13:29:13 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 13:29:13 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:03.254 [2024-07-25 13:29:13.528133] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:29:03.254 [2024-07-25 13:29:13.528209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025337 ] 00:29:03.254 [... repeated qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used messages for 0000:3d:01.0-02.7 and 0000:3f:01.0-02.7, identical to the first run above ...] 00:29:03.254 [2024-07-25 13:29:13.674699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.512 [2024-07-25 13:29:13.782951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.512 cpumask for 'job0' is too big 00:29:03.512 cpumask for 'job1' is too big 00:29:03.512 cpumask for 'job2' is too big 00:29:03.512 cpumask for 'job3' is too big 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:29:06.099 Running I/O for 2 seconds...
00:29:06.099
00:29:06.099 Latency(us)
00:29:06.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:06.099 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:29:06.099 Malloc0 : 2.02 26128.96 25.52 0.00 0.00 9785.10 1703.94 14994.64
00:29:06.099 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:29:06.099 Malloc0 : 2.02 26106.90 25.50 0.00 0.00 9772.38 1703.94 13264.49
00:29:06.099 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:29:06.099 Malloc0 : 2.02 26084.87 25.47 0.00 0.00 9759.93 1690.83 11586.76
00:29:06.099 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:29:06.099 Malloc0 : 2.02 26062.91 25.45 0.00 0.00 9748.24 1690.83 10118.76
00:29:06.099 ===================================================================================================================
00:29:06.099 Total : 104383.65 101.94 0.00 0.00 9766.42 1690.83 14994.64'
00:29:06.099 13:29:16 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup
00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf
00:29:06.099 13:29:16 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0
00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:29:06.099
00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:29:06.099 13:29:16 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job
job1 write Malloc0 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:06.099 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:06.099 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:06.099 13:29:16 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:08.728 13:29:18 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-25 13:29:16.239291] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:29:08.728 [2024-07-25 13:29:16.239356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025861 ] 00:29:08.728 Using job config with 3 jobs 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3f:02.1 
cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:08.728 [2024-07-25 13:29:16.381946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.728 [2024-07-25 13:29:16.476810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.728 cpumask for '\''job0'\'' is too big 00:29:08.728 cpumask for '\''job1'\'' is too big 00:29:08.728 cpumask for '\''job2'\'' is too big 00:29:08.728 Running I/O for 2 seconds... 
00:29:08.728
00:29:08.728 Latency(us)
00:29:08.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:08.728 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:29:08.728 Malloc0 : 2.01 35306.83 34.48 0.00 0.00 7252.71 1677.72 10643.05
00:29:08.728 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:29:08.728 Malloc0 : 2.02 35318.53 34.49 0.00 0.00 7235.15 1664.61 8965.32
00:29:08.728 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:29:08.728 Malloc0 : 2.02 35288.49 34.46 0.00 0.00 7226.81 1658.06 7602.18
00:29:08.728 ===================================================================================================================
00:29:08.728 Total : 105913.85 103.43 0.00 0.00 7238.21 1658.06 10643.05'
00:29:08.728 13:29:18 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-25 13:29:16.239291] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:29:08.728 [2024-07-25 13:29:16.239356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025861 ] 00:29:08.728 Using job config with 3 jobs 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:08.728 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.728 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3f:02.1 
cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:08.729 [2024-07-25 13:29:16.381946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.729 [2024-07-25 13:29:16.476810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.729 cpumask for '\''job0'\'' is too big 00:29:08.729 cpumask for '\''job1'\'' is too big 00:29:08.729 cpumask for '\''job2'\'' is too big 00:29:08.729 Running I/O for 2 seconds... 
00:29:08.729 00:29:08.729 Latency(us) 00:29:08.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.729 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:08.729 Malloc0 : 2.01 35306.83 34.48 0.00 0.00 7252.71 1677.72 10643.05 00:29:08.729 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:08.729 Malloc0 : 2.02 35318.53 34.49 0.00 0.00 7235.15 1664.61 8965.32 00:29:08.729 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:08.729 Malloc0 : 2.02 35288.49 34.46 0.00 0.00 7226.81 1658.06 7602.18 00:29:08.729 =================================================================================================================== 00:29:08.729 Total : 105913.85 103.43 0.00 0.00 7238.21 1658.06 10643.05' 00:29:08.729 13:29:18 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:29:08.729 13:29:18 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 13:29:16.239291] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:29:08.729 [2024-07-25 13:29:16.239356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025861 ] 00:29:08.729 Using job config with 3 jobs 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:08.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.729 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3f:02.1 
cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:08.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.730 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:08.730 [2024-07-25 13:29:16.381946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.730 [2024-07-25 13:29:16.476810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.730 cpumask for '\''job0'\'' is too big 00:29:08.730 cpumask for '\''job1'\'' is too big 00:29:08.730 cpumask for '\''job2'\'' is too big 00:29:08.730 Running I/O for 2 seconds... 
00:29:08.730 00:29:08.730 Latency(us) 00:29:08.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.730 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:08.730 Malloc0 : 2.01 35306.83 34.48 0.00 0.00 7252.71 1677.72 10643.05 00:29:08.730 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:08.730 Malloc0 : 2.02 35318.53 34.49 0.00 0.00 7235.15 1664.61 8965.32 00:29:08.730 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:08.730 Malloc0 : 2.02 35288.49 34.46 0.00 0.00 7226.81 1658.06 7602.18 00:29:08.730 =================================================================================================================== 00:29:08.730 Total : 105913.85 103.43 0.00 0.00 7238.21 1658.06 10643.05' 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:08.730 00:29:08.730 13:29:18 bdevperf_config -- 
bdevperf/common.sh@20 -- # cat 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:08.730 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:08.730 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:08.730 00:29:08.730 
13:29:18 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:29:08.730 13:29:18 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:29:08.731 13:29:18 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:29:08.731 13:29:18 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:29:08.731 00:29:08.731 13:29:18 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:29:08.731 13:29:18 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:11.267 13:29:21 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-25 13:29:18.951482] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:29:11.267 [2024-07-25 13:29:18.951531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026240 ] 00:29:11.267 Using job config with 4 jobs 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3f:02.1 
cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:11.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.267 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:11.267 [2024-07-25 13:29:19.090908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.267 [2024-07-25 13:29:19.197814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.267 cpumask for '\''job0'\'' is too big 00:29:11.267 cpumask for '\''job1'\'' is too big 00:29:11.267 cpumask for '\''job2'\'' is too big 00:29:11.267 cpumask for '\''job3'\'' is too big 00:29:11.267 Running I/O for 2 seconds... 
00:29:11.267 00:29:11.267 Latency(us) 00:29:11.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.267 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:11.267 Malloc0 : 2.04 12955.67 12.65 0.00 0.00 19736.37 3460.30 30408.70 00:29:11.267 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:11.267 Malloc1 : 2.04 12944.50 12.64 0.00 0.00 19737.49 4246.73 30408.70 00:29:11.267 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:11.267 Malloc0 : 2.04 12933.65 12.63 0.00 0.00 19689.20 3434.09 26843.55 00:29:11.267 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:11.267 Malloc1 : 2.04 12922.49 12.62 0.00 0.00 19688.42 4220.52 26843.55 00:29:11.267 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:11.267 Malloc0 : 2.04 12911.68 12.61 0.00 0.00 19640.43 3460.30 23383.24 00:29:11.267 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:11.267 Malloc1 : 2.04 12900.67 12.60 0.00 0.00 19639.62 4246.73 23278.39 00:29:11.267 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:11.267 Malloc0 : 2.05 12889.79 12.59 0.00 0.00 19591.23 3434.09 20132.66 00:29:11.267 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:11.267 Malloc1 : 2.05 12878.79 12.58 0.00 0.00 19590.63 4220.52 20132.66 00:29:11.267 =================================================================================================================== 00:29:11.267 Total : 103337.25 100.92 0.00 0.00 19664.17 3434.09 30408.70' 00:29:11.267 13:29:21 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-25 13:29:18.951482] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
' 00:29:11.268 13:29:21 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:29:11.268 13:29:21 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 13:29:18.951482] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...' 00:29:11.269 13:29:21 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:29:11.269 13:29:21 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:29:11.269 13:29:21 bdevperf_config -- bdevperf/test_config.sh@44
-- # cleanup 00:29:11.269 13:29:21 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:29:11.269 13:29:21 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:29:11.269 00:29:11.269 real 0m11.017s 00:29:11.269 user 0m9.728s 00:29:11.269 sys 0m1.143s 00:29:11.269 13:29:21 bdevperf_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:11.269 13:29:21 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:29:11.269 ************************************ 00:29:11.269 END TEST bdevperf_config 00:29:11.269 ************************************ 00:29:11.269 13:29:21 -- spdk/autotest.sh@196 -- # uname -s 00:29:11.269 13:29:21 -- spdk/autotest.sh@196 -- # [[ Linux == Linux ]] 00:29:11.269 13:29:21 -- spdk/autotest.sh@197 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:29:11.269 13:29:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:11.269 13:29:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:11.269 13:29:21 -- common/autotest_common.sh@10 -- # set +x 00:29:11.269 ************************************ 00:29:11.269 START TEST reactor_set_interrupt 00:29:11.269 ************************************ 00:29:11.269 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:29:11.531 * Looking for test storage... 
00:29:11.531 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:11.531 13:29:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:29:11.531 13:29:21 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:29:11.531 13:29:21 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:11.531 13:29:21 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:11.531 13:29:21 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 00:29:11.531 13:29:21 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:29:11.531 13:29:21 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:29:11.531 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:29:11.531 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:29:11.531 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:29:11.531 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:29:11.531 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:29:11.531 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:29:11.531 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:29:11.531 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:29:11.531 
13:29:21 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:29:11.531 13:29:21 reactor_set_interrupt -- 
common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 
00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 
00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:29:11.531 13:29:21 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:29:11.531 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:29:11.531 13:29:21 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:29:11.531 13:29:21 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:29:11.531 13:29:21 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:29:11.531 13:29:21 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:29:11.531 13:29:21 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:29:11.531 13:29:21 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:29:11.531 13:29:21 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:29:11.531 13:29:21 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:29:11.531 13:29:21 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:29:11.531 13:29:21 reactor_set_interrupt -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:29:11.531 13:29:21 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:29:11.531 13:29:21 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:29:11.531 13:29:21 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:29:11.531 13:29:21 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:29:11.531 13:29:21 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:29:11.531 #define SPDK_CONFIG_H 00:29:11.531 #define SPDK_CONFIG_APPS 1 00:29:11.531 #define SPDK_CONFIG_ARCH native 00:29:11.531 #undef SPDK_CONFIG_ASAN 00:29:11.531 #undef SPDK_CONFIG_AVAHI 00:29:11.531 #undef SPDK_CONFIG_CET 00:29:11.531 #define SPDK_CONFIG_COVERAGE 1 00:29:11.531 #define SPDK_CONFIG_CROSS_PREFIX 00:29:11.531 #define SPDK_CONFIG_CRYPTO 1 00:29:11.531 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:29:11.531 #undef SPDK_CONFIG_CUSTOMOCF 00:29:11.531 #undef SPDK_CONFIG_DAOS 00:29:11.531 #define SPDK_CONFIG_DAOS_DIR 00:29:11.531 #define SPDK_CONFIG_DEBUG 1 00:29:11.531 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:29:11.531 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:29:11.531 #define SPDK_CONFIG_DPDK_INC_DIR 00:29:11.531 #define SPDK_CONFIG_DPDK_LIB_DIR 00:29:11.531 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:29:11.531 #undef SPDK_CONFIG_DPDK_UADK 00:29:11.531 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:29:11.531 #define SPDK_CONFIG_EXAMPLES 1 00:29:11.531 #undef SPDK_CONFIG_FC 00:29:11.531 #define SPDK_CONFIG_FC_PATH 00:29:11.531 #define SPDK_CONFIG_FIO_PLUGIN 1 00:29:11.531 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:29:11.531 #undef SPDK_CONFIG_FUSE 00:29:11.531 #undef SPDK_CONFIG_FUZZER 00:29:11.531 #define SPDK_CONFIG_FUZZER_LIB 
00:29:11.531 #undef SPDK_CONFIG_GOLANG 00:29:11.531 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:29:11.531 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:29:11.531 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:29:11.531 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:29:11.531 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:29:11.531 #undef SPDK_CONFIG_HAVE_LIBBSD 00:29:11.531 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:29:11.531 #define SPDK_CONFIG_IDXD 1 00:29:11.531 #define SPDK_CONFIG_IDXD_KERNEL 1 00:29:11.531 #define SPDK_CONFIG_IPSEC_MB 1 00:29:11.531 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:29:11.531 #define SPDK_CONFIG_ISAL 1 00:29:11.531 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:29:11.531 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:29:11.531 #define SPDK_CONFIG_LIBDIR 00:29:11.531 #undef SPDK_CONFIG_LTO 00:29:11.531 #define SPDK_CONFIG_MAX_LCORES 128 00:29:11.531 #define SPDK_CONFIG_NVME_CUSE 1 00:29:11.531 #undef SPDK_CONFIG_OCF 00:29:11.531 #define SPDK_CONFIG_OCF_PATH 00:29:11.531 #define SPDK_CONFIG_OPENSSL_PATH 00:29:11.531 #undef SPDK_CONFIG_PGO_CAPTURE 00:29:11.531 #define SPDK_CONFIG_PGO_DIR 00:29:11.531 #undef SPDK_CONFIG_PGO_USE 00:29:11.531 #define SPDK_CONFIG_PREFIX /usr/local 00:29:11.531 #undef SPDK_CONFIG_RAID5F 00:29:11.531 #undef SPDK_CONFIG_RBD 00:29:11.531 #define SPDK_CONFIG_RDMA 1 00:29:11.531 #define SPDK_CONFIG_RDMA_PROV verbs 00:29:11.531 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:29:11.531 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:29:11.531 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:29:11.531 #define SPDK_CONFIG_SHARED 1 00:29:11.531 #undef SPDK_CONFIG_SMA 00:29:11.531 #define SPDK_CONFIG_TESTS 1 00:29:11.531 #undef SPDK_CONFIG_TSAN 00:29:11.531 #define SPDK_CONFIG_UBLK 1 00:29:11.531 #define SPDK_CONFIG_UBSAN 1 00:29:11.531 #undef SPDK_CONFIG_UNIT_TESTS 00:29:11.531 #undef SPDK_CONFIG_URING 00:29:11.531 #define SPDK_CONFIG_URING_PATH 00:29:11.531 #undef SPDK_CONFIG_URING_ZNS 00:29:11.531 #undef 
SPDK_CONFIG_USDT 00:29:11.531 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:29:11.531 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:29:11.531 #undef SPDK_CONFIG_VFIO_USER 00:29:11.531 #define SPDK_CONFIG_VFIO_USER_DIR 00:29:11.531 #define SPDK_CONFIG_VHOST 1 00:29:11.531 #define SPDK_CONFIG_VIRTIO 1 00:29:11.531 #undef SPDK_CONFIG_VTUNE 00:29:11.531 #define SPDK_CONFIG_VTUNE_DIR 00:29:11.531 #define SPDK_CONFIG_WERROR 1 00:29:11.531 #define SPDK_CONFIG_WPDK_DIR 00:29:11.531 #undef SPDK_CONFIG_XNVME 00:29:11.531 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:29:11.531 13:29:21 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:29:11.531 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:29:11.531 13:29:21 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.531 13:29:21 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.531 13:29:21 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.531 13:29:21 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.531 13:29:21 reactor_set_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.531 13:29:21 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.531 13:29:21 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:29:11.531 13:29:21 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.531 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:29:11.531 13:29:21 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:29:11.531 13:29:21 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@6 -- # 
_pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@81 -- # [[ 
............................... != QEMU ]] 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:29:11.532 13:29:21 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 1 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export 
SPDK_TEST_ISAL 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:29:11.532 13:29:21 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:29:11.532 
13:29:21 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 1 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 1 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_OPAL 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:29:11.532 13:29:21 reactor_set_interrupt -- 
common/autotest_common.sh@158 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 1 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@173 -- # : 0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:29:11.532 
13:29:21 reactor_set_interrupt -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@183 
-- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@200 -- # 
asan_suppression_file=/var/tmp/asan_suppression_file 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@202 -- # cat 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@255 -- # 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@265 -- # export valgrind= 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@265 -- # valgrind= 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@271 -- # uname -s 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@274 -- # [[ 1 -eq 1 ]] 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@278 -- # export HUGE_EVEN_ALLOC=yes 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@278 -- 
# HUGE_EVEN_ALLOC=yes 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@281 -- # MAKE=make 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@301 -- # TEST_MODE= 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@320 -- # [[ -z 1026708 ]] 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@320 -- # kill -0 1026708 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:29:11.532 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local mount target_dir 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.9UDfiR 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:29:11.533 
13:29:21 reactor_set_interrupt -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.9UDfiR/tests/interrupt /tmp/spdk.9UDfiR 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@329 -- # df -T 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=954302464 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4330127360 
00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=55060529152 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742305280 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=6681776128 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=30866341888 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871150592 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4808704 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:11.533 13:29:21 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=12338663424 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348461056 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=9797632 00:29:11.533 13:29:22 
reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=30869995520 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871154688 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=1159168 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=6174224384 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174228480 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:29:11.533 * Looking for test storage... 
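The trace above is `autotest_common.sh` reading `df` output into associative arrays keyed by mount point. A minimal bash sketch of that bookkeeping, reusing the overlay and tmpfs figures from this run (the `/dev/shm` mount point is an assumption for illustration; the trace does not show mount names):

```shell
#!/usr/bin/env bash
# Each df line is read as: source fs size use avail _ mount,
# matching the read -r call in the trace, then stored per mount point.
declare -A mounts fss sizes uses avails
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  sizes["$mount"]=$size
  uses["$mount"]=$use
  avails["$mount"]=$avail
done <<'EOF'
spdk_root overlay 61742305280 6681776128 55060529152 - /
tmpfs tmpfs 30871150592 4808704 30866341888 - /dev/shm
EOF
echo "${fss[/]} ${avails[/]}"
```

The arrays are what the later "Looking for test storage" step consults when sizing a candidate directory.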
00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@370 -- # local target_space new_size 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:11.533 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mount=/ 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@376 -- # target_space=55060529152 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@383 -- # new_size=8896368640 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 
00:29:11.793 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@391 -- # return 0 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
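The storage search in the trace resolves the mount point for the test directory with the `awk '$1 !~ /Filesystem/{print $6}'` filter, then checks that the would-be usage stays under 95% of the filesystem. A small sketch with canned `df` output (the byte counts in the percentage check are the `sizes[/]` and `new_size` values from this run; the df table rows are illustrative):

```shell
# Column 6 of df output is the mount point; the header row is skipped
# by filtering out the line whose first field is "Filesystem".
df_output='Filesystem      1K-blocks     Used Available Use% Mounted on
overlay          60295220 53770048   6525172  90% /'
mount=$(printf '%s\n' "$df_output" | awk '$1 !~ /Filesystem/{print $6}')
echo "$mount"

# The trace then tests (( new_size * 100 / sizes[/] > 95 )); with
# new_size=8896368640 and sizes[/]=61742305280 this is ~14%, so the
# candidate is accepted and SPDK_TEST_STORAGE is exported.
echo $(( 8896368640 * 100 / 61742305280 ))
```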
0 : 0 - 1]' 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:29:11.793 13:29:22 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:29:11.793 13:29:22 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:11.793 13:29:22 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:29:11.793 13:29:22 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:29:11.793 13:29:22 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:29:11.793 13:29:22 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:29:11.793 13:29:22 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:29:11.793 13:29:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:29:11.793 13:29:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:29:11.793 13:29:22 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:29:11.793 13:29:22 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.793 13:29:22 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:29:11.793 13:29:22 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=1026807 00:29:11.793 13:29:22 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:11.793 13:29:22 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:29:11.793 13:29:22 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 1026807 /var/tmp/spdk.sock 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1026807 ']' 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:11.793 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:11.793 [2024-07-25 13:29:22.066308] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:29:11.793 [2024-07-25 13:29:22.066369] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026807 ] 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3d:02.3 cannot 
be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:11.793 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.793 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:11.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.794 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:11.794 [2024-07-25 13:29:22.199382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:12.053 [2024-07-25 13:29:22.284377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.053 [2024-07-25 13:29:22.284492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:12.053 [2024-07-25 13:29:22.284496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.053 [2024-07-25 13:29:22.353647] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:12.621 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:12.621 13:29:22 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:29:12.621 13:29:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:29:12.621 13:29:22 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:12.880 Malloc0 00:29:12.880 Malloc1 00:29:12.880 Malloc2 00:29:12.880 13:29:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:29:12.880 13:29:23 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:29:12.880 13:29:23 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:12.880 13:29:23 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:29:12.880 5000+0 records in 00:29:12.880 5000+0 records out 00:29:12.880 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0243598 s, 420 MB/s 00:29:12.880 13:29:23 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:29:13.139 AIO0 00:29:13.139 13:29:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 1026807 00:29:13.139 13:29:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 1026807 without_thd 00:29:13.139 13:29:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=1026807 00:29:13.139 13:29:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:29:13.139 13:29:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 
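The `setup_bdev_aio` step above creates the AIO backing file with plain `dd`: 5000 blocks of 2048 bytes is the 10,240,000 bytes (~9.8 MiB) reported in the trace. A self-contained sketch (using a `mktemp` path as a stand-in for the run's `test/interrupt/aiofile`):

```shell
# Create a zero-filled file the same way the trace does and verify
# its size: 2048 * 5000 = 10240000 bytes.
aiofile=$(mktemp)
dd if=/dev/zero of="$aiofile" bs=2048 count=5000 2>/dev/null
size=$(wc -c < "$aiofile")
echo "$size"
rm -f "$aiofile"
```

The resulting file is then registered with `bdev_aio_create` over the RPC socket, as the trace shows.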
00:29:13.139 13:29:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:29:13.139 13:29:23 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:29:13.139 13:29:23 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:29:13.139 13:29:23 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:29:13.139 13:29:23 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:13.139 13:29:23 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:29:13.139 13:29:23 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:13.398 13:29:23 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:29:13.398 13:29:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:29:13.398 13:29:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:29:13.398 13:29:23 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:29:13.398 13:29:23 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:29:13.398 13:29:23 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:29:13.398 13:29:23 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:13.398 13:29:23 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:29:13.398 13:29:23 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:13.657 13:29:24 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
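In the `reactor_get_thread_ids` calls above, the hex reactor mask (`0x1`, `0x4`) is rewritten as decimal (`1`, `4`) before being handed to `jq` as the `$reactor_cpumask` argument. The trace does not show the exact conversion, but shell arithmetic expansion produces the same result; a hedged sketch:

```shell
# Hex-to-decimal via arithmetic expansion: $((0x4)) evaluates the hex
# constant and yields 4, matching the reactor_cpumask=4 seen in the trace.
reactor_cpumask=0x4
reactor_cpumask=$(( reactor_cpumask ))
echo "$reactor_cpumask"
```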
echo '' 00:29:13.657 13:29:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:29:13.657 13:29:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:29:13.657 spdk_thread ids are 1 on reactor0. 00:29:13.657 13:29:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:13.657 13:29:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1026807 0 00:29:13.657 13:29:24 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1026807 0 idle 00:29:13.657 13:29:24 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1026807 00:29:13.657 13:29:24 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:13.657 13:29:24 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:13.657 13:29:24 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:13.657 13:29:24 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:13.657 13:29:24 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:13.657 13:29:24 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:13.657 13:29:24 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:13.657 13:29:24 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1026807 -w 256 00:29:13.657 13:29:24 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1026807 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.37 reactor_0' 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1026807 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.37 reactor_0 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:13.917 13:29:24 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # awk '{print $9}' 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1026807 1 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1026807 1 idle 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1026807 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1026807 -w 256 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1026853 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.00 reactor_1' 
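The busy/idle probes in this section all follow the same pattern: grab the reactor's thread line from `top -bHn 1`, strip leading whitespace, read column 9 (%CPU), truncate to an integer, and compare against the trace's thresholds (busy means not below 70, idle means not above 30). A sketch against a canned `top` line shaped like the ones in this log:

```shell
# Sample thread line in top(1) batch format; column 9 is %CPU.
top_reactor='1026807 root  20   0  128.2g  34944  22400 R 99.9 0.1 0:00.79 reactor_0'
cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
cpu_rate=${cpu_rate%.*}   # 99.9 -> 99, matching the integer compare in the trace
echo "$cpu_rate"
if [ "$cpu_rate" -ge 70 ]; then echo busy; fi
```

The retry loop in the trace (`(( j = 10 ))` counting down) simply repeats this probe until the reactor settles in the expected state.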
00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1026853 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.00 reactor_1 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1026807 2 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1026807 2 idle 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1026807 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:13.917 13:29:24 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:13.918 13:29:24 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:13.918 13:29:24 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:13.918 13:29:24 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:13.918 13:29:24 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:13.918 13:29:24 reactor_set_interrupt -- 
interrupt/common.sh@24 -- # top -bHn 1 -p 1026807 -w 256 00:29:13.918 13:29:24 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:29:14.176 13:29:24 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1026854 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.00 reactor_2' 00:29:14.176 13:29:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1026854 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.00 reactor_2 00:29:14.176 13:29:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:14.176 13:29:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:14.176 13:29:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:14.176 13:29:24 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:14.176 13:29:24 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:14.176 13:29:24 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:14.176 13:29:24 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:14.176 13:29:24 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:14.176 13:29:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:29:14.176 13:29:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:29:14.176 13:29:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:29:14.434 [2024-07-25 13:29:24.781794] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:14.434 13:29:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:29:14.691 [2024-07-25 13:29:25.009264] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:29:14.691 [2024-07-25 13:29:25.009456] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:14.691 13:29:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:29:14.980 [2024-07-25 13:29:25.233064] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:29:14.980 [2024-07-25 13:29:25.233179] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1026807 0 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1026807 0 busy 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1026807 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:14.980 13:29:25 reactor_set_interrupt 
-- interrupt/common.sh@24 -- # top -bHn 1 -p 1026807 -w 256 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1026807 root 20 0 128.2g 34944 22400 R 99.9 0.1 0:00.79 reactor_0' 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1026807 root 20 0 128.2g 34944 22400 R 99.9 0.1 0:00.79 reactor_0 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1026807 2 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1026807 2 busy 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1026807 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:14.980 13:29:25 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1026807 -w 256 00:29:14.980 13:29:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:29:15.238 13:29:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1026854 root 20 0 128.2g 34944 22400 R 99.9 0.1 0:00.36 reactor_2' 00:29:15.238 13:29:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1026854 root 20 0 128.2g 34944 22400 R 99.9 0.1 0:00.36 reactor_2 00:29:15.238 13:29:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:15.238 13:29:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:15.238 13:29:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:29:15.238 13:29:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:29:15.238 13:29:25 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:29:15.238 13:29:25 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:29:15.238 13:29:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:29:15.238 13:29:25 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:15.238 13:29:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:29:15.497 [2024-07-25 13:29:25.825048] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:29:15.497 [2024-07-25 13:29:25.825130] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:15.497 13:29:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:29:15.497 13:29:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 1026807 2 00:29:15.497 13:29:25 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1026807 2 idle 00:29:15.497 13:29:25 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1026807 00:29:15.497 13:29:25 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:29:15.497 13:29:25 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:15.497 13:29:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:15.497 13:29:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:15.497 13:29:25 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:15.497 13:29:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:15.497 13:29:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:15.497 13:29:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1026807 -w 256 00:29:15.497 13:29:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:29:15.755 13:29:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1026854 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.59 reactor_2' 00:29:15.755 13:29:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:15.755 13:29:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1026854 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.59 reactor_2 00:29:15.755 13:29:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:15.755 13:29:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:15.755 13:29:26 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:15.755 13:29:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:15.755 13:29:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:15.755 13:29:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:15.755 13:29:26 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:15.755 13:29:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:29:15.755 [2024-07-25 13:29:26.233045] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:29:15.755 [2024-07-25 13:29:26.233150] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:16.014 13:29:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:29:16.014 13:29:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:29:16.015 13:29:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:29:16.015 [2024-07-25 13:29:26.461722] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:16.015 13:29:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 1026807 0 00:29:16.015 13:29:26 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1026807 0 idle 00:29:16.015 13:29:26 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1026807 00:29:16.015 13:29:26 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:16.015 13:29:26 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:16.015 13:29:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:16.015 13:29:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:16.015 13:29:26 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:16.015 13:29:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:16.015 13:29:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:16.015 13:29:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:29:16.015 13:29:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1026807 -w 256 00:29:16.274 13:29:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1026807 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:01.60 reactor_0' 00:29:16.274 13:29:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1026807 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:01.60 reactor_0 00:29:16.274 13:29:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:16.274 13:29:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:16.274 13:29:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:16.274 13:29:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:16.274 13:29:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:16.274 13:29:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = 
\i\d\l\e ]] 00:29:16.274 13:29:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:16.274 13:29:26 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:16.274 13:29:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:29:16.274 13:29:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:29:16.274 13:29:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:29:16.274 13:29:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 1026807 00:29:16.274 13:29:26 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1026807 ']' 00:29:16.274 13:29:26 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 1026807 00:29:16.274 13:29:26 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:29:16.274 13:29:26 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:16.274 13:29:26 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1026807 00:29:16.274 13:29:26 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:16.274 13:29:26 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:16.274 13:29:26 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1026807' 00:29:16.274 killing process with pid 1026807 00:29:16.274 13:29:26 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 1026807 00:29:16.274 13:29:26 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 1026807 00:29:16.533 13:29:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:29:16.533 13:29:26 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:29:16.533 13:29:26 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:29:16.533 13:29:26 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.533 13:29:26 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:29:16.533 13:29:26 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=1027713 00:29:16.533 13:29:26 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:16.533 13:29:26 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:29:16.533 13:29:26 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 1027713 /var/tmp/spdk.sock 00:29:16.533 13:29:26 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1027713 ']' 00:29:16.533 13:29:26 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.533 13:29:26 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:16.533 13:29:26 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.533 13:29:26 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:16.533 13:29:26 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:16.533 [2024-07-25 13:29:26.980935] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
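The `waitforlisten` step logged here blocks until the freshly started interrupt_tgt answers on /var/tmp/spdk.sock. A minimal self-contained sketch of that polling pattern — the listener is simulated with a temp file, and the retry count and sleep interval are illustrative assumptions, not the helper's real values:

```shell
# Sketch of the waitforlisten polling loop (interrupt_common.sh@26).
# The real helper probes the RPC socket; here a background touch
# stands in for the target creating its listen socket.
rpc_addr=$(mktemp -u)                 # stand-in for /var/tmp/spdk.sock
max_retries=100                       # assumed value for the sketch
( sleep 0.2; touch "$rpc_addr" ) &    # simulate the target coming up
i=0
ready=0
while [ "$i" -lt "$max_retries" ]; do
    if [ -e "$rpc_addr" ]; then ready=1; break; fi
    sleep 0.1
    i=$((i + 1))
done
wait
echo "ready=$ready after $i retries"
rm -f "$rpc_addr"
```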
00:29:16.533 [2024-07-25 13:29:26.980997] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027713 ] 00:29:16.792 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:16.792 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:16.792 [the same qat_pci_device_allocate()/EAL "cannot be used" pair repeats for each remaining device, 0000:3d:01.1 through 0000:3f:02.7] 00:29:16.792 [2024-07-25 13:29:27.113640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:16.792 [2024-07-25 13:29:27.197884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.792 [2024-07-25 13:29:27.197977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:16.792 [2024-07-25 13:29:27.197981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.792 [2024-07-25 13:29:27.266936] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:29:17.729 13:29:27 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:17.729 13:29:27 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:29:17.729 13:29:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:29:17.729 13:29:27 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:17.729 Malloc0 00:29:17.729 Malloc1 00:29:17.729 Malloc2 00:29:17.729 13:29:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:29:17.729 13:29:28 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:29:17.729 13:29:28 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:17.729 13:29:28 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:29:17.729 5000+0 records in 00:29:17.729 5000+0 records out 00:29:17.729 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0242679 s, 422 MB/s 00:29:17.730 13:29:28 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:29:17.988 AIO0 00:29:17.988 13:29:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 1027713 00:29:17.988 13:29:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 1027713 00:29:17.988 13:29:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=1027713 00:29:17.988 13:29:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:29:17.988 13:29:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:29:17.988 13:29:28 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:29:17.988 13:29:28 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:29:17.988 13:29:28 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:29:17.988 13:29:28 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:29:17.988 13:29:28 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:17.988 13:29:28 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:17.988 13:29:28 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:29:18.247 13:29:28 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:29:18.247 13:29:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:29:18.247 13:29:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:29:18.247 13:29:28 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:29:18.247 13:29:28 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:29:18.247 13:29:28 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:29:18.247 13:29:28 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:18.247 13:29:28 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:29:18.247 13:29:28 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:18.505 13:29:28 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:29:18.505 
13:29:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:29:18.505 13:29:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:29:18.505 spdk_thread ids are 1 on reactor0. 00:29:18.505 13:29:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:18.505 13:29:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1027713 0 00:29:18.505 13:29:28 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1027713 0 idle 00:29:18.505 13:29:28 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1027713 00:29:18.505 13:29:28 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:18.505 13:29:28 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:18.505 13:29:28 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:18.505 13:29:28 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:18.505 13:29:28 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:18.505 13:29:28 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:18.505 13:29:28 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:18.506 13:29:28 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1027713 -w 256 00:29:18.506 13:29:28 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:29:18.764 13:29:29 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1027713 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.37 reactor_0' 00:29:18.764 13:29:29 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1027713 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.37 reactor_0 00:29:18.764 13:29:29 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:18.764 13:29:29 reactor_set_interrupt -- interrupt/common.sh@25 -- # 
awk '{print $9}' 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1027713 1 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1027713 1 idle 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1027713 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1027713 -w 256 00:29:18.765 13:29:29 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:29:19.023 13:29:29 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1027754 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.00 reactor_1' 00:29:19.023 13:29:29 
reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1027754 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.00 reactor_1 00:29:19.023 13:29:29 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:19.023 13:29:29 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:19.023 13:29:29 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:19.023 13:29:29 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:19.023 13:29:29 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:19.023 13:29:29 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:19.023 13:29:29 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:19.023 13:29:29 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:19.023 13:29:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:19.023 13:29:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1027713 2 00:29:19.023 13:29:29 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1027713 2 idle 00:29:19.023 13:29:29 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1027713 00:29:19.023 13:29:29 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:29:19.023 13:29:29 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:19.023 13:29:29 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 
1 -p 1027713 -w 256 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1027755 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.00 reactor_2' 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1027755 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.00 reactor_2 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:29:19.024 13:29:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:29:19.282 [2024-07-25 13:29:29.682570] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:29:19.282 [2024-07-25 13:29:29.682772] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
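The 10 MB AIO backing file this run created at common.sh@76 (5000 records in/out, "10240000 bytes (10 MB, 9.8 MiB) copied") can be reproduced byte-for-byte in isolation — same bs and count, with the jenkins workspace path swapped for a temp file:

```shell
# Reproduce the AIO backing-file setup from interrupt/common.sh@76.
aiofile=$(mktemp)
dd if=/dev/zero of="$aiofile" bs=2048 count=5000 2>/dev/null
size=$(stat -c %s "$aiofile")
echo "aiofile size: $size bytes"   # 2048 * 5000 = 10240000
rm -f "$aiofile"
```

The file is then handed to `rpc.py bdev_aio_create <file> AIO0 2048` to back the AIO0 bdev, as the log shows.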
00:29:19.282 [2024-07-25 13:29:29.682952] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:19.282 13:29:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:29:19.540 [2024-07-25 13:29:29.911039] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:29:19.540 [2024-07-25 13:29:29.911232] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:19.540 13:29:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:29:19.540 13:29:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1027713 0 00:29:19.540 13:29:29 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1027713 0 busy 00:29:19.540 13:29:29 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1027713 00:29:19.540 13:29:29 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:19.540 13:29:29 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:19.540 13:29:29 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:29:19.540 13:29:29 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:19.540 13:29:29 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:19.540 13:29:29 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:19.540 13:29:29 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1027713 -w 256 00:29:19.540 13:29:29 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1027713 root 20 0 128.2g 34944 22400 R 99.9 0.1 0:00.78 reactor_0' 00:29:19.798 13:29:30 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1027713 root 20 0 128.2g 34944 22400 R 99.9 0.1 0:00.78 reactor_0 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1027713 2 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1027713 2 busy 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1027713 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1027713 -w 256 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:29:19.798 
13:29:30 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1027755 root 20 0 128.2g 34944 22400 R 99.9 0.1 0:00.36 reactor_2' 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1027755 root 20 0 128.2g 34944 22400 R 99.9 0.1 0:00.36 reactor_2 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:19.798 13:29:30 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:29:20.057 [2024-07-25 13:29:30.504704] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:29:20.057 [2024-07-25 13:29:30.504814] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 1027713 2 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1027713 2 idle 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1027713 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1027713 -w 256 00:29:20.057 13:29:30 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:29:20.315 13:29:30 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1027755 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.59 reactor_2' 00:29:20.315 13:29:30 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:20.316 13:29:30 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1027755 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:00.59 reactor_2 00:29:20.316 13:29:30 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:20.316 13:29:30 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:20.316 13:29:30 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:20.316 13:29:30 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:20.316 13:29:30 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:20.316 13:29:30 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:20.316 13:29:30 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:20.316 13:29:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:29:20.576 [2024-07-25 13:29:30.913759] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:29:20.576 [2024-07-25 13:29:30.913938] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:29:20.576 [2024-07-25 13:29:30.913961] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:20.576 13:29:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:29:20.576 13:29:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 1027713 0 00:29:20.576 13:29:30 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1027713 0 idle 00:29:20.576 13:29:30 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1027713 00:29:20.576 13:29:30 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:20.576 13:29:30 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:20.576 13:29:30 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:20.576 13:29:30 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:20.576 13:29:30 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:20.576 13:29:30 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:20.576 13:29:30 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:20.576 13:29:30 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1027713 -w 256 00:29:20.576 13:29:30 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:29:20.921 13:29:31 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1027713 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:01.60 reactor_0' 00:29:20.921 13:29:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:20.921 13:29:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1027713 root 20 0 128.2g 34944 22400 S 0.0 0.1 0:01.60 reactor_0 00:29:20.921 13:29:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:20.921 13:29:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:20.921 13:29:31 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:20.921 13:29:31 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:20.921 13:29:31 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:20.921 13:29:31 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:20.921 13:29:31 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:20.921 13:29:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:29:20.921 13:29:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:29:20.921 13:29:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:20.921 13:29:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 1027713 00:29:20.921 13:29:31 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1027713 ']' 00:29:20.921 13:29:31 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 
1027713 00:29:20.921 13:29:31 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:29:20.921 13:29:31 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:20.922 13:29:31 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1027713 00:29:20.922 13:29:31 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:20.922 13:29:31 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:20.922 13:29:31 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1027713' 00:29:20.922 killing process with pid 1027713 00:29:20.922 13:29:31 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 1027713 00:29:20.922 13:29:31 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 1027713 00:29:20.922 13:29:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:29:20.922 13:29:31 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:29:20.922 00:29:20.922 real 0m9.663s 00:29:20.922 user 0m8.949s 00:29:20.922 sys 0m2.096s 00:29:20.922 13:29:31 reactor_set_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:20.922 13:29:31 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:20.922 ************************************ 00:29:20.922 END TEST reactor_set_interrupt 00:29:20.922 ************************************ 00:29:21.181 13:29:31 -- spdk/autotest.sh@198 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:29:21.181 13:29:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:21.181 13:29:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:21.181 13:29:31 -- common/autotest_common.sh@10 -- # set +x 00:29:21.181 
************************************ 00:29:21.181 START TEST reap_unregistered_poller 00:29:21.181 ************************************ 00:29:21.181 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:29:21.181 * Looking for test storage... 00:29:21.181 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:21.181 13:29:31 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:29:21.181 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:29:21.181 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:21.181 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:21.181 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
00:29:21.181 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:29:21.181 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:29:21.181 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:29:21.181 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:29:21.181 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:29:21.181 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:29:21.181 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:29:21.181 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:29:21.181 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:29:21.181 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@7 -- # 
CONFIG_PREFIX=/usr/local 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:29:21.181 
13:29:31 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:29:21.181 13:29:31 reap_unregistered_poller -- 
common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:29:21.181 13:29:31 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@65 
-- # CONFIG_APPS=y 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:29:21.182 13:29:31 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:29:21.182 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:29:21.182 13:29:31 
reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:29:21.182 13:29:31 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:29:21.182 13:29:31 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:29:21.182 13:29:31 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:29:21.182 13:29:31 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:29:21.182 13:29:31 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:29:21.182 13:29:31 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:29:21.182 13:29:31 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:29:21.182 13:29:31 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:29:21.182 13:29:31 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:29:21.182 13:29:31 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:29:21.182 13:29:31 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:29:21.182 13:29:31 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:29:21.182 13:29:31 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:29:21.182 13:29:31 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef 
SPDK_CONFIG_H 00:29:21.182 #define SPDK_CONFIG_H 00:29:21.182 #define SPDK_CONFIG_APPS 1 00:29:21.182 #define SPDK_CONFIG_ARCH native 00:29:21.182 #undef SPDK_CONFIG_ASAN 00:29:21.182 #undef SPDK_CONFIG_AVAHI 00:29:21.182 #undef SPDK_CONFIG_CET 00:29:21.182 #define SPDK_CONFIG_COVERAGE 1 00:29:21.182 #define SPDK_CONFIG_CROSS_PREFIX 00:29:21.182 #define SPDK_CONFIG_CRYPTO 1 00:29:21.182 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:29:21.182 #undef SPDK_CONFIG_CUSTOMOCF 00:29:21.182 #undef SPDK_CONFIG_DAOS 00:29:21.182 #define SPDK_CONFIG_DAOS_DIR 00:29:21.182 #define SPDK_CONFIG_DEBUG 1 00:29:21.182 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:29:21.182 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:29:21.182 #define SPDK_CONFIG_DPDK_INC_DIR 00:29:21.182 #define SPDK_CONFIG_DPDK_LIB_DIR 00:29:21.182 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:29:21.182 #undef SPDK_CONFIG_DPDK_UADK 00:29:21.182 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:29:21.182 #define SPDK_CONFIG_EXAMPLES 1 00:29:21.182 #undef SPDK_CONFIG_FC 00:29:21.182 #define SPDK_CONFIG_FC_PATH 00:29:21.182 #define SPDK_CONFIG_FIO_PLUGIN 1 00:29:21.182 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:29:21.182 #undef SPDK_CONFIG_FUSE 00:29:21.182 #undef SPDK_CONFIG_FUZZER 00:29:21.182 #define SPDK_CONFIG_FUZZER_LIB 00:29:21.182 #undef SPDK_CONFIG_GOLANG 00:29:21.182 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:29:21.182 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:29:21.182 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:29:21.182 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:29:21.182 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:29:21.182 #undef SPDK_CONFIG_HAVE_LIBBSD 00:29:21.182 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:29:21.182 #define SPDK_CONFIG_IDXD 1 00:29:21.182 #define SPDK_CONFIG_IDXD_KERNEL 1 00:29:21.182 #define SPDK_CONFIG_IPSEC_MB 1 00:29:21.182 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 
00:29:21.182 #define SPDK_CONFIG_ISAL 1 00:29:21.182 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:29:21.182 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:29:21.182 #define SPDK_CONFIG_LIBDIR 00:29:21.182 #undef SPDK_CONFIG_LTO 00:29:21.182 #define SPDK_CONFIG_MAX_LCORES 128 00:29:21.182 #define SPDK_CONFIG_NVME_CUSE 1 00:29:21.182 #undef SPDK_CONFIG_OCF 00:29:21.182 #define SPDK_CONFIG_OCF_PATH 00:29:21.182 #define SPDK_CONFIG_OPENSSL_PATH 00:29:21.182 #undef SPDK_CONFIG_PGO_CAPTURE 00:29:21.182 #define SPDK_CONFIG_PGO_DIR 00:29:21.182 #undef SPDK_CONFIG_PGO_USE 00:29:21.182 #define SPDK_CONFIG_PREFIX /usr/local 00:29:21.182 #undef SPDK_CONFIG_RAID5F 00:29:21.182 #undef SPDK_CONFIG_RBD 00:29:21.182 #define SPDK_CONFIG_RDMA 1 00:29:21.182 #define SPDK_CONFIG_RDMA_PROV verbs 00:29:21.182 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:29:21.182 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:29:21.182 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:29:21.182 #define SPDK_CONFIG_SHARED 1 00:29:21.182 #undef SPDK_CONFIG_SMA 00:29:21.182 #define SPDK_CONFIG_TESTS 1 00:29:21.182 #undef SPDK_CONFIG_TSAN 00:29:21.182 #define SPDK_CONFIG_UBLK 1 00:29:21.182 #define SPDK_CONFIG_UBSAN 1 00:29:21.182 #undef SPDK_CONFIG_UNIT_TESTS 00:29:21.182 #undef SPDK_CONFIG_URING 00:29:21.182 #define SPDK_CONFIG_URING_PATH 00:29:21.182 #undef SPDK_CONFIG_URING_ZNS 00:29:21.182 #undef SPDK_CONFIG_USDT 00:29:21.182 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:29:21.182 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:29:21.182 #undef SPDK_CONFIG_VFIO_USER 00:29:21.182 #define SPDK_CONFIG_VFIO_USER_DIR 00:29:21.182 #define SPDK_CONFIG_VHOST 1 00:29:21.182 #define SPDK_CONFIG_VIRTIO 1 00:29:21.182 #undef SPDK_CONFIG_VTUNE 00:29:21.182 #define SPDK_CONFIG_VTUNE_DIR 00:29:21.182 #define SPDK_CONFIG_WERROR 1 00:29:21.182 #define SPDK_CONFIG_WPDK_DIR 00:29:21.182 #undef SPDK_CONFIG_XNVME 00:29:21.182 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:29:21.182 13:29:31 reap_unregistered_poller 
-- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:29:21.182 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:29:21.182 13:29:31 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.182 13:29:31 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.182 13:29:31 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.182 13:29:31 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.182 13:29:31 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.182 13:29:31 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.182 13:29:31 
reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:29:21.183 13:29:31 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.183 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:29:21.183 13:29:31 reap_unregistered_poller -- 
pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:29:21.183 13:29:31 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:29:21.183 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0 00:29:21.183 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:29:21.183 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:29:21.183 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 0 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 1 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:29:21.445 13:29:31 reap_unregistered_poller -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:29:21.445 13:29:31 reap_unregistered_poller -- 
common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:29:21.445 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 1 00:29:21.446 13:29:31 reap_unregistered_poller -- 
common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 1 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:29:21.446 13:29:31 reap_unregistered_poller -- 
common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- 
common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 1 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@173 -- # : 0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:29:21.446 13:29:31 reap_unregistered_poller -- 
common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@183 -- # export 
PCI_BLOCK_SYNC_ON_RESET=yes 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@200 -- # 
asan_suppression_file=/var/tmp/asan_suppression_file 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@202 -- # cat 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:29:21.446 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:29:21.447 13:29:31 reap_unregistered_poller -- 
common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@265 -- # export valgrind= 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@265 -- # valgrind= 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@271 -- # uname -s 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@274 -- # [[ 1 -eq 1 ]] 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@278 -- # export 
HUGE_EVEN_ALLOC=yes 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@278 -- # HUGE_EVEN_ALLOC=yes 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@281 -- # MAKE=make 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@301 -- # TEST_MODE= 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@320 -- # [[ -z 1028548 ]] 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@320 -- # kill -0 1028548 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local mount target_dir 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.mrOwhj 00:29:21.447 13:29:31 
reap_unregistered_poller -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.mrOwhj/tests/interrupt /tmp/spdk.mrOwhj 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@329 -- # df -T 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=954302464 
00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4330127360 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=55060365312 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742305280 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=6681939968 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=30866341888 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871150592 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4808704 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@364 
-- # avails["$mount"]=12338663424 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348461056 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=9797632 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=30869995520 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871154688 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=1159168 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=6174224384 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174228480 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:29:21.447 * Looking for test storage... 
00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@370 -- # local target_space new_size 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mount=/ 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@376 -- # target_space=55060365312 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@383 -- # new_size=8896532480 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:21.447 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:21.447 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@391 -- # return 0 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:29:21.448 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:29:21.448 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:21.448 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:29:21.448 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:29:21.448 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:29:21.448 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:29:21.448 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:29:21.448 13:29:31 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:29:21.448 13:29:31 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # 
PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:29:21.448 13:29:31 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:29:21.448 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.448 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:29:21.448 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=1028697 00:29:21.448 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:21.448 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:29:21.448 13:29:31 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 1028697 /var/tmp/spdk.sock 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@831 -- # '[' -z 1028697 ']' 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:21.448 13:29:31 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:29:21.448 [2024-07-25 13:29:31.823984] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:29:21.448 [2024-07-25 13:29:31.824052] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028697 ] 00:29:21.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:21.448 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:21.708 [2024-07-25 13:29:31.957383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:21.708 [2024-07-25 13:29:32.046355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.708 [2024-07-25 13:29:32.046450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:21.708 [2024-07-25 13:29:32.046454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.708 [2024-07-25 13:29:32.115368] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:29:22.276 13:29:32 reap_unregistered_poller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:22.276 13:29:32 reap_unregistered_poller -- common/autotest_common.sh@864 -- # return 0 00:29:22.276 13:29:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:29:22.276 13:29:32 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.276 13:29:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:29:22.276 13:29:32 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:29:22.535 13:29:32 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.535 13:29:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:29:22.535 "name": "app_thread", 00:29:22.535 "id": 1, 00:29:22.535 "active_pollers": [], 00:29:22.535 "timed_pollers": [ 00:29:22.535 { 00:29:22.535 "name": "rpc_subsystem_poll_servers", 00:29:22.535 "id": 1, 00:29:22.535 "state": "waiting", 00:29:22.535 "run_count": 0, 00:29:22.535 "busy_count": 0, 00:29:22.535 "period_ticks": 10000000 00:29:22.535 } 00:29:22.535 ], 00:29:22.535 "paused_pollers": [] 00:29:22.535 }' 00:29:22.535 13:29:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:29:22.535 13:29:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:29:22.535 13:29:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:29:22.535 13:29:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:29:22.535 13:29:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:29:22.535 13:29:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:29:22.535 
13:29:32 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:29:22.535 13:29:32 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:22.535 13:29:32 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:29:22.535 5000+0 records in 00:29:22.535 5000+0 records out 00:29:22.535 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0234373 s, 437 MB/s 00:29:22.535 13:29:32 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:29:22.793 AIO0 00:29:22.793 13:29:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:23.052 13:29:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:29:23.052 13:29:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:29:23.052 13:29:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:29:23.052 13:29:33 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.052 13:29:33 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:29:23.052 13:29:33 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.311 13:29:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:29:23.311 "name": "app_thread", 00:29:23.311 "id": 1, 00:29:23.311 "active_pollers": [], 00:29:23.311 "timed_pollers": [ 00:29:23.311 { 00:29:23.311 "name": "rpc_subsystem_poll_servers", 00:29:23.311 "id": 1, 00:29:23.311 "state": "waiting", 00:29:23.311 "run_count": 0, 00:29:23.311 "busy_count": 0, 
00:29:23.311 "period_ticks": 10000000 00:29:23.311 } 00:29:23.311 ], 00:29:23.311 "paused_pollers": [] 00:29:23.311 }' 00:29:23.311 13:29:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:29:23.311 13:29:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:29:23.311 13:29:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:29:23.311 13:29:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:29:23.311 13:29:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:29:23.311 13:29:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:29:23.311 13:29:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:29:23.311 13:29:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 1028697 00:29:23.311 13:29:33 reap_unregistered_poller -- common/autotest_common.sh@950 -- # '[' -z 1028697 ']' 00:29:23.311 13:29:33 reap_unregistered_poller -- common/autotest_common.sh@954 -- # kill -0 1028697 00:29:23.311 13:29:33 reap_unregistered_poller -- common/autotest_common.sh@955 -- # uname 00:29:23.311 13:29:33 reap_unregistered_poller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:23.311 13:29:33 reap_unregistered_poller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1028697 00:29:23.311 13:29:33 reap_unregistered_poller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:23.311 13:29:33 reap_unregistered_poller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:23.311 13:29:33 reap_unregistered_poller -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 1028697' 00:29:23.311 killing process with pid 1028697 00:29:23.311 13:29:33 reap_unregistered_poller -- common/autotest_common.sh@969 -- # kill 1028697 00:29:23.311 13:29:33 reap_unregistered_poller -- common/autotest_common.sh@974 -- # wait 1028697 00:29:23.570 13:29:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:29:23.570 13:29:33 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:29:23.570 00:29:23.570 real 0m2.428s 00:29:23.570 user 0m1.486s 00:29:23.570 sys 0m0.676s 00:29:23.570 13:29:33 reap_unregistered_poller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:23.570 13:29:33 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:29:23.570 ************************************ 00:29:23.570 END TEST reap_unregistered_poller 00:29:23.570 ************************************ 00:29:23.570 13:29:33 -- spdk/autotest.sh@202 -- # uname -s 00:29:23.570 13:29:33 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:29:23.570 13:29:33 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:29:23.570 13:29:33 -- spdk/autotest.sh@209 -- # [[ 1 -eq 0 ]] 00:29:23.570 13:29:33 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:29:23.570 13:29:33 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:29:23.570 13:29:33 -- spdk/autotest.sh@264 -- # timing_exit lib 00:29:23.570 13:29:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:23.570 13:29:33 -- common/autotest_common.sh@10 -- # set +x 00:29:23.570 13:29:33 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:29:23.570 13:29:33 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:29:23.570 13:29:33 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:29:23.570 13:29:33 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:23.570 13:29:33 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:29:23.570 13:29:33 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:29:23.570 13:29:33 
-- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:29:23.570 13:29:33 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:29:23.570 13:29:33 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:29:23.570 13:29:33 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:23.570 13:29:33 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:29:23.570 13:29:33 -- spdk/autotest.sh@351 -- # '[' 1 -eq 1 ']' 00:29:23.570 13:29:33 -- spdk/autotest.sh@352 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:29:23.570 13:29:33 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:23.570 13:29:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:23.570 13:29:33 -- common/autotest_common.sh@10 -- # set +x 00:29:23.570 ************************************ 00:29:23.570 START TEST compress_compdev 00:29:23.570 ************************************ 00:29:23.570 13:29:34 compress_compdev -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:29:23.830 * Looking for test storage... 
00:29:23.830 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:29:23.830 13:29:34 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:29:23.830 13:29:34 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.830 13:29:34 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.830 13:29:34 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.830 13:29:34 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.830 13:29:34 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.830 13:29:34 compress_compdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.830 13:29:34 compress_compdev -- 
paths/export.sh@5 -- # export PATH 00:29:23.830 13:29:34 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:23.830 13:29:34 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:23.830 13:29:34 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:23.830 13:29:34 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:29:23.830 13:29:34 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:29:23.830 13:29:34 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:29:23.830 13:29:34 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:29:23.830 13:29:34 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1029175 00:29:23.830 13:29:34 compress_compdev -- 
compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:23.830 13:29:34 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1029175 00:29:23.830 13:29:34 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 1029175 ']' 00:29:23.830 13:29:34 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.830 13:29:34 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:23.830 13:29:34 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.830 13:29:34 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:23.830 13:29:34 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:29:23.830 13:29:34 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:29:23.830 [2024-07-25 13:29:34.225903] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:29:23.830 [2024-07-25 13:29:34.225964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029175 ] 00:29:23.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:23.830 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:24.090 [2024-07-25 13:29:34.346085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:24.090 [2024-07-25 13:29:34.433881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.090 [2024-07-25 13:29:34.433886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.657 [2024-07-25 13:29:35.112908] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:29:24.916 13:29:35 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.916 13:29:35 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:29:24.916 13:29:35 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:29:24.916 13:29:35 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:24.916 13:29:35 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:28.204 [2024-07-25 13:29:38.263518] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2541f40 PMD being used: compress_qat 00:29:28.204 13:29:38
compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:29:28.204 13:29:38 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:29:28.204 13:29:38 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:28.204 13:29:38 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:28.204 13:29:38 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:28.204 13:29:38 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:28.204 13:29:38 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:28.204 13:29:38 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:29:28.204 [ 00:29:28.204 { 00:29:28.204 "name": "Nvme0n1", 00:29:28.204 "aliases": [ 00:29:28.204 "3fa6274f-a5e5-4b15-9204-d33cef7b9e98" 00:29:28.204 ], 00:29:28.204 "product_name": "NVMe disk", 00:29:28.204 "block_size": 512, 00:29:28.204 "num_blocks": 3907029168, 00:29:28.204 "uuid": "3fa6274f-a5e5-4b15-9204-d33cef7b9e98", 00:29:28.204 "assigned_rate_limits": { 00:29:28.204 "rw_ios_per_sec": 0, 00:29:28.204 "rw_mbytes_per_sec": 0, 00:29:28.204 "r_mbytes_per_sec": 0, 00:29:28.204 "w_mbytes_per_sec": 0 00:29:28.204 }, 00:29:28.204 "claimed": false, 00:29:28.204 "zoned": false, 00:29:28.204 "supported_io_types": { 00:29:28.204 "read": true, 00:29:28.204 "write": true, 00:29:28.204 "unmap": true, 00:29:28.204 "flush": true, 00:29:28.204 "reset": true, 00:29:28.204 "nvme_admin": true, 00:29:28.204 "nvme_io": true, 00:29:28.204 "nvme_io_md": false, 00:29:28.204 "write_zeroes": true, 00:29:28.204 "zcopy": false, 00:29:28.204 "get_zone_info": false, 00:29:28.204 "zone_management": false, 00:29:28.204 "zone_append": false, 00:29:28.204 "compare": false, 00:29:28.204 "compare_and_write": false, 00:29:28.204 
"abort": true, 00:29:28.204 "seek_hole": false, 00:29:28.204 "seek_data": false, 00:29:28.204 "copy": false, 00:29:28.204 "nvme_iov_md": false 00:29:28.204 }, 00:29:28.204 "driver_specific": { 00:29:28.204 "nvme": [ 00:29:28.204 { 00:29:28.204 "pci_address": "0000:d8:00.0", 00:29:28.204 "trid": { 00:29:28.204 "trtype": "PCIe", 00:29:28.204 "traddr": "0000:d8:00.0" 00:29:28.204 }, 00:29:28.204 "ctrlr_data": { 00:29:28.204 "cntlid": 0, 00:29:28.204 "vendor_id": "0x8086", 00:29:28.204 "model_number": "INTEL SSDPE2KX020T8", 00:29:28.204 "serial_number": "BTLJ125505KA2P0BGN", 00:29:28.204 "firmware_revision": "VDV10170", 00:29:28.204 "oacs": { 00:29:28.204 "security": 0, 00:29:28.204 "format": 1, 00:29:28.204 "firmware": 1, 00:29:28.204 "ns_manage": 1 00:29:28.204 }, 00:29:28.204 "multi_ctrlr": false, 00:29:28.204 "ana_reporting": false 00:29:28.204 }, 00:29:28.204 "vs": { 00:29:28.204 "nvme_version": "1.2" 00:29:28.204 }, 00:29:28.204 "ns_data": { 00:29:28.204 "id": 1, 00:29:28.204 "can_share": false 00:29:28.204 } 00:29:28.204 } 00:29:28.204 ], 00:29:28.205 "mp_policy": "active_passive" 00:29:28.205 } 00:29:28.205 } 00:29:28.205 ] 00:29:28.463 13:29:38 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:28.463 13:29:38 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:29:28.463 [2024-07-25 13:29:38.912592] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2379180 PMD being used: compress_qat 00:29:29.840 b2c1c640-fd65-4722-97c2-ad6416e0237e 00:29:29.840 13:29:39 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:29:29.840 ce6cd5a4-085d-44a4-bf1f-b949e97a4aba 00:29:29.840 13:29:40 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:29:29.840 13:29:40 compress_compdev -- common/autotest_common.sh@899 -- 
# local bdev_name=lvs0/lv0 00:29:29.840 13:29:40 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:29.840 13:29:40 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:29.840 13:29:40 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:29.840 13:29:40 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:29.840 13:29:40 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:30.098 13:29:40 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:30.357 [ 00:29:30.357 { 00:29:30.357 "name": "ce6cd5a4-085d-44a4-bf1f-b949e97a4aba", 00:29:30.357 "aliases": [ 00:29:30.357 "lvs0/lv0" 00:29:30.357 ], 00:29:30.357 "product_name": "Logical Volume", 00:29:30.357 "block_size": 512, 00:29:30.357 "num_blocks": 204800, 00:29:30.357 "uuid": "ce6cd5a4-085d-44a4-bf1f-b949e97a4aba", 00:29:30.357 "assigned_rate_limits": { 00:29:30.357 "rw_ios_per_sec": 0, 00:29:30.357 "rw_mbytes_per_sec": 0, 00:29:30.357 "r_mbytes_per_sec": 0, 00:29:30.357 "w_mbytes_per_sec": 0 00:29:30.357 }, 00:29:30.357 "claimed": false, 00:29:30.357 "zoned": false, 00:29:30.357 "supported_io_types": { 00:29:30.357 "read": true, 00:29:30.357 "write": true, 00:29:30.357 "unmap": true, 00:29:30.357 "flush": false, 00:29:30.357 "reset": true, 00:29:30.357 "nvme_admin": false, 00:29:30.357 "nvme_io": false, 00:29:30.357 "nvme_io_md": false, 00:29:30.357 "write_zeroes": true, 00:29:30.357 "zcopy": false, 00:29:30.357 "get_zone_info": false, 00:29:30.357 "zone_management": false, 00:29:30.357 "zone_append": false, 00:29:30.357 "compare": false, 00:29:30.357 "compare_and_write": false, 00:29:30.357 "abort": false, 00:29:30.357 "seek_hole": true, 00:29:30.357 "seek_data": true, 00:29:30.357 "copy": false, 00:29:30.357 "nvme_iov_md": false 
00:29:30.357 }, 00:29:30.357 "driver_specific": { 00:29:30.357 "lvol": { 00:29:30.357 "lvol_store_uuid": "b2c1c640-fd65-4722-97c2-ad6416e0237e", 00:29:30.357 "base_bdev": "Nvme0n1", 00:29:30.357 "thin_provision": true, 00:29:30.357 "num_allocated_clusters": 0, 00:29:30.357 "snapshot": false, 00:29:30.357 "clone": false, 00:29:30.357 "esnap_clone": false 00:29:30.357 } 00:29:30.357 } 00:29:30.357 } 00:29:30.357 ] 00:29:30.357 13:29:40 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:30.357 13:29:40 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:29:30.357 13:29:40 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:29:30.357 [2024-07-25 13:29:40.819965] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:29:30.357 COMP_lvs0/lv0 00:29:30.357 13:29:40 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:30.357 13:29:40 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:29:30.616 13:29:40 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:30.616 13:29:40 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:30.616 13:29:40 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:30.616 13:29:40 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:30.616 13:29:40 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:30.616 13:29:41 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:30.875 [ 00:29:30.875 { 00:29:30.875 "name": "COMP_lvs0/lv0", 00:29:30.875 "aliases": [ 00:29:30.875 
"beda6c93-3156-51dd-a5be-e51d4b9a1b4e" 00:29:30.875 ], 00:29:30.875 "product_name": "compress", 00:29:30.875 "block_size": 512, 00:29:30.875 "num_blocks": 200704, 00:29:30.875 "uuid": "beda6c93-3156-51dd-a5be-e51d4b9a1b4e", 00:29:30.875 "assigned_rate_limits": { 00:29:30.875 "rw_ios_per_sec": 0, 00:29:30.875 "rw_mbytes_per_sec": 0, 00:29:30.875 "r_mbytes_per_sec": 0, 00:29:30.875 "w_mbytes_per_sec": 0 00:29:30.875 }, 00:29:30.875 "claimed": false, 00:29:30.875 "zoned": false, 00:29:30.875 "supported_io_types": { 00:29:30.875 "read": true, 00:29:30.875 "write": true, 00:29:30.875 "unmap": false, 00:29:30.875 "flush": false, 00:29:30.875 "reset": false, 00:29:30.875 "nvme_admin": false, 00:29:30.875 "nvme_io": false, 00:29:30.875 "nvme_io_md": false, 00:29:30.875 "write_zeroes": true, 00:29:30.875 "zcopy": false, 00:29:30.875 "get_zone_info": false, 00:29:30.875 "zone_management": false, 00:29:30.875 "zone_append": false, 00:29:30.875 "compare": false, 00:29:30.875 "compare_and_write": false, 00:29:30.875 "abort": false, 00:29:30.875 "seek_hole": false, 00:29:30.875 "seek_data": false, 00:29:30.875 "copy": false, 00:29:30.875 "nvme_iov_md": false 00:29:30.875 }, 00:29:30.875 "driver_specific": { 00:29:30.875 "compress": { 00:29:30.875 "name": "COMP_lvs0/lv0", 00:29:30.875 "base_bdev_name": "ce6cd5a4-085d-44a4-bf1f-b949e97a4aba", 00:29:30.875 "pm_path": "/tmp/pmem/7cb082b8-5bcb-4236-a229-35e4ff6914db" 00:29:30.875 } 00:29:30.875 } 00:29:30.875 } 00:29:30.875 ] 00:29:30.875 13:29:41 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:30.875 13:29:41 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:31.134 [2024-07-25 13:29:41.366224] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fc2381b1600 PMD being used: compress_qat 00:29:31.134 [2024-07-25 13:29:41.368311] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x253e880 PMD being 
used: compress_qat 00:29:31.134 Running I/O for 3 seconds... 00:29:34.426 00:29:34.426 Latency(us) 00:29:34.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.426 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:29:34.426 Verification LBA range: start 0x0 length 0x3100 00:29:34.426 COMP_lvs0/lv0 : 3.01 4119.59 16.09 0.00 0.00 7712.91 130.25 13107.20 00:29:34.426 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:29:34.426 Verification LBA range: start 0x3100 length 0x3100 00:29:34.426 COMP_lvs0/lv0 : 3.01 4216.64 16.47 0.00 0.00 7551.39 117.96 13526.63 00:29:34.426 =================================================================================================================== 00:29:34.426 Total : 8336.23 32.56 0.00 0.00 7631.23 117.96 13526.63 00:29:34.426 0 00:29:34.426 13:29:44 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:29:34.426 13:29:44 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:29:34.426 13:29:44 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:29:34.426 13:29:44 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:34.426 13:29:44 compress_compdev -- compress/compress.sh@78 -- # killprocess 1029175 00:29:34.426 13:29:44 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 1029175 ']' 00:29:34.426 13:29:44 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 1029175 00:29:34.426 13:29:44 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:29:34.426 13:29:44 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:34.426 13:29:44 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1029175 00:29:34.685 13:29:44 compress_compdev 
-- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:34.685 13:29:44 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:34.685 13:29:44 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1029175' 00:29:34.685 killing process with pid 1029175 00:29:34.685 13:29:44 compress_compdev -- common/autotest_common.sh@969 -- # kill 1029175 00:29:34.685 Received shutdown signal, test time was about 3.000000 seconds 00:29:34.685 00:29:34.685 Latency(us) 00:29:34.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.685 =================================================================================================================== 00:29:34.685 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:34.685 13:29:44 compress_compdev -- common/autotest_common.sh@974 -- # wait 1029175 00:29:37.220 13:29:47 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:29:37.220 13:29:47 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:29:37.220 13:29:47 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1031312 00:29:37.220 13:29:47 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:37.220 13:29:47 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:29:37.220 13:29:47 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1031312 00:29:37.220 13:29:47 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 1031312 ']' 00:29:37.220 13:29:47 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.220 13:29:47 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 
00:29:37.220 13:29:47 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.220 13:29:47 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:37.220 13:29:47 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:29:37.220 [2024-07-25 13:29:47.442993] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:29:37.220 [2024-07-25 13:29:47.443056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031312 ] 00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.220 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.220 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.220 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.220 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.220 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.220 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.220 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.220 EAL: Requested device 0000:3d:01.7 cannot be used 
00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.220 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.220 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.220 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.220 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.220 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.220 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.220 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.220 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:37.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.221 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:37.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.221 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:37.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.221 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:37.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.221 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:37.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.221 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:37.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.221 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:37.221 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.221 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:37.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.221 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:37.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.221 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:37.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.221 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:37.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.221 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:37.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.221 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:37.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.221 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:37.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.221 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:37.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.221 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:37.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.221 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:37.221 [2024-07-25 13:29:47.562551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:37.221 [2024-07-25 13:29:47.649010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.221 [2024-07-25 13:29:47.649016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.866 [2024-07-25 13:29:48.322484] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:29:38.125 13:29:48 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:38.125 13:29:48 compress_compdev -- 
common/autotest_common.sh@864 -- # return 0 00:29:38.125 13:29:48 compress_compdev -- compress/compress.sh@74 -- # create_vols 512 00:29:38.125 13:29:48 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:38.125 13:29:48 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:41.414 [2024-07-25 13:29:51.471611] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1f29f40 PMD being used: compress_qat 00:29:41.414 13:29:51 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:29:41.414 13:29:51 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:29:41.414 13:29:51 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:41.414 13:29:51 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:41.414 13:29:51 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:41.414 13:29:51 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:41.414 13:29:51 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:41.414 13:29:51 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:29:41.673 [ 00:29:41.673 { 00:29:41.673 "name": "Nvme0n1", 00:29:41.673 "aliases": [ 00:29:41.673 "947ecf34-e7f6-401d-878d-b38b04d68f5f" 00:29:41.673 ], 00:29:41.673 "product_name": "NVMe disk", 00:29:41.673 "block_size": 512, 00:29:41.673 "num_blocks": 3907029168, 00:29:41.673 "uuid": "947ecf34-e7f6-401d-878d-b38b04d68f5f", 00:29:41.673 "assigned_rate_limits": { 00:29:41.673 "rw_ios_per_sec": 0, 00:29:41.673 "rw_mbytes_per_sec": 0, 00:29:41.673 "r_mbytes_per_sec": 0, 00:29:41.673 "w_mbytes_per_sec": 0 00:29:41.673 }, 00:29:41.673 
"claimed": false, 00:29:41.673 "zoned": false, 00:29:41.673 "supported_io_types": { 00:29:41.673 "read": true, 00:29:41.673 "write": true, 00:29:41.673 "unmap": true, 00:29:41.673 "flush": true, 00:29:41.673 "reset": true, 00:29:41.673 "nvme_admin": true, 00:29:41.673 "nvme_io": true, 00:29:41.673 "nvme_io_md": false, 00:29:41.673 "write_zeroes": true, 00:29:41.673 "zcopy": false, 00:29:41.673 "get_zone_info": false, 00:29:41.673 "zone_management": false, 00:29:41.673 "zone_append": false, 00:29:41.673 "compare": false, 00:29:41.673 "compare_and_write": false, 00:29:41.673 "abort": true, 00:29:41.673 "seek_hole": false, 00:29:41.673 "seek_data": false, 00:29:41.673 "copy": false, 00:29:41.673 "nvme_iov_md": false 00:29:41.673 }, 00:29:41.673 "driver_specific": { 00:29:41.673 "nvme": [ 00:29:41.673 { 00:29:41.673 "pci_address": "0000:d8:00.0", 00:29:41.673 "trid": { 00:29:41.673 "trtype": "PCIe", 00:29:41.673 "traddr": "0000:d8:00.0" 00:29:41.673 }, 00:29:41.673 "ctrlr_data": { 00:29:41.673 "cntlid": 0, 00:29:41.673 "vendor_id": "0x8086", 00:29:41.673 "model_number": "INTEL SSDPE2KX020T8", 00:29:41.673 "serial_number": "BTLJ125505KA2P0BGN", 00:29:41.673 "firmware_revision": "VDV10170", 00:29:41.673 "oacs": { 00:29:41.673 "security": 0, 00:29:41.673 "format": 1, 00:29:41.673 "firmware": 1, 00:29:41.673 "ns_manage": 1 00:29:41.673 }, 00:29:41.673 "multi_ctrlr": false, 00:29:41.673 "ana_reporting": false 00:29:41.673 }, 00:29:41.673 "vs": { 00:29:41.673 "nvme_version": "1.2" 00:29:41.673 }, 00:29:41.673 "ns_data": { 00:29:41.673 "id": 1, 00:29:41.673 "can_share": false 00:29:41.673 } 00:29:41.673 } 00:29:41.673 ], 00:29:41.673 "mp_policy": "active_passive" 00:29:41.673 } 00:29:41.673 } 00:29:41.673 ] 00:29:41.673 13:29:51 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:41.673 13:29:51 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 
lvs0 00:29:41.673 [2024-07-25 13:29:52.120615] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1d61180 PMD being used: compress_qat 00:29:43.049 f4ae826e-5fdc-4be5-8d14-6767f70cba25 00:29:43.050 13:29:53 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:29:43.050 9e097803-83d9-4023-820c-046a899dfc7c 00:29:43.050 13:29:53 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:29:43.050 13:29:53 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:29:43.050 13:29:53 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:43.050 13:29:53 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:43.050 13:29:53 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:43.050 13:29:53 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:43.050 13:29:53 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:43.309 13:29:53 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:43.568 [ 00:29:43.568 { 00:29:43.568 "name": "9e097803-83d9-4023-820c-046a899dfc7c", 00:29:43.568 "aliases": [ 00:29:43.568 "lvs0/lv0" 00:29:43.568 ], 00:29:43.568 "product_name": "Logical Volume", 00:29:43.568 "block_size": 512, 00:29:43.568 "num_blocks": 204800, 00:29:43.568 "uuid": "9e097803-83d9-4023-820c-046a899dfc7c", 00:29:43.568 "assigned_rate_limits": { 00:29:43.568 "rw_ios_per_sec": 0, 00:29:43.568 "rw_mbytes_per_sec": 0, 00:29:43.568 "r_mbytes_per_sec": 0, 00:29:43.568 "w_mbytes_per_sec": 0 00:29:43.568 }, 00:29:43.568 "claimed": false, 00:29:43.568 "zoned": false, 00:29:43.568 "supported_io_types": { 00:29:43.568 "read": true, 00:29:43.568 "write": true, 
00:29:43.568 "unmap": true, 00:29:43.568 "flush": false, 00:29:43.568 "reset": true, 00:29:43.568 "nvme_admin": false, 00:29:43.568 "nvme_io": false, 00:29:43.568 "nvme_io_md": false, 00:29:43.568 "write_zeroes": true, 00:29:43.568 "zcopy": false, 00:29:43.568 "get_zone_info": false, 00:29:43.568 "zone_management": false, 00:29:43.568 "zone_append": false, 00:29:43.568 "compare": false, 00:29:43.568 "compare_and_write": false, 00:29:43.568 "abort": false, 00:29:43.568 "seek_hole": true, 00:29:43.568 "seek_data": true, 00:29:43.568 "copy": false, 00:29:43.568 "nvme_iov_md": false 00:29:43.568 }, 00:29:43.568 "driver_specific": { 00:29:43.568 "lvol": { 00:29:43.568 "lvol_store_uuid": "f4ae826e-5fdc-4be5-8d14-6767f70cba25", 00:29:43.568 "base_bdev": "Nvme0n1", 00:29:43.568 "thin_provision": true, 00:29:43.568 "num_allocated_clusters": 0, 00:29:43.568 "snapshot": false, 00:29:43.568 "clone": false, 00:29:43.568 "esnap_clone": false 00:29:43.568 } 00:29:43.568 } 00:29:43.568 } 00:29:43.568 ] 00:29:43.568 13:29:53 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:43.568 13:29:53 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:29:43.568 13:29:53 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:29:43.568 [2024-07-25 13:29:54.039314] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:29:43.568 COMP_lvs0/lv0 00:29:43.568 13:29:54 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:43.568 13:29:54 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:29:43.568 13:29:54 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:43.568 13:29:54 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:43.568 13:29:54 compress_compdev -- common/autotest_common.sh@902 
-- # [[ -z '' ]] 00:29:43.568 13:29:54 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:43.568 13:29:54 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:43.827 13:29:54 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:44.085 [ 00:29:44.086 { 00:29:44.086 "name": "COMP_lvs0/lv0", 00:29:44.086 "aliases": [ 00:29:44.086 "8e63533b-07ba-5823-bec4-058a463fdfd1" 00:29:44.086 ], 00:29:44.086 "product_name": "compress", 00:29:44.086 "block_size": 512, 00:29:44.086 "num_blocks": 200704, 00:29:44.086 "uuid": "8e63533b-07ba-5823-bec4-058a463fdfd1", 00:29:44.086 "assigned_rate_limits": { 00:29:44.086 "rw_ios_per_sec": 0, 00:29:44.086 "rw_mbytes_per_sec": 0, 00:29:44.086 "r_mbytes_per_sec": 0, 00:29:44.086 "w_mbytes_per_sec": 0 00:29:44.086 }, 00:29:44.086 "claimed": false, 00:29:44.086 "zoned": false, 00:29:44.086 "supported_io_types": { 00:29:44.086 "read": true, 00:29:44.086 "write": true, 00:29:44.086 "unmap": false, 00:29:44.086 "flush": false, 00:29:44.086 "reset": false, 00:29:44.086 "nvme_admin": false, 00:29:44.086 "nvme_io": false, 00:29:44.086 "nvme_io_md": false, 00:29:44.086 "write_zeroes": true, 00:29:44.086 "zcopy": false, 00:29:44.086 "get_zone_info": false, 00:29:44.086 "zone_management": false, 00:29:44.086 "zone_append": false, 00:29:44.086 "compare": false, 00:29:44.086 "compare_and_write": false, 00:29:44.086 "abort": false, 00:29:44.086 "seek_hole": false, 00:29:44.086 "seek_data": false, 00:29:44.086 "copy": false, 00:29:44.086 "nvme_iov_md": false 00:29:44.086 }, 00:29:44.086 "driver_specific": { 00:29:44.086 "compress": { 00:29:44.086 "name": "COMP_lvs0/lv0", 00:29:44.086 "base_bdev_name": "9e097803-83d9-4023-820c-046a899dfc7c", 00:29:44.086 "pm_path": "/tmp/pmem/63b24585-6eb4-4c16-9e93-92a926226716" 00:29:44.086 } 
00:29:44.086 } 00:29:44.086 } 00:29:44.086 ] 00:29:44.086 13:29:54 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:44.086 13:29:54 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:44.344 [2024-07-25 13:29:54.613600] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f29001b1600 PMD being used: compress_qat 00:29:44.344 [2024-07-25 13:29:54.615667] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1f26aa0 PMD being used: compress_qat 00:29:44.344 Running I/O for 3 seconds... 00:29:47.632 00:29:47.632 Latency(us) 00:29:47.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.632 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:29:47.632 Verification LBA range: start 0x0 length 0x3100 00:29:47.632 COMP_lvs0/lv0 : 3.00 4006.88 15.65 0.00 0.00 7940.72 127.80 15099.49 00:29:47.632 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:29:47.632 Verification LBA range: start 0x3100 length 0x3100 00:29:47.632 COMP_lvs0/lv0 : 3.00 4132.90 16.14 0.00 0.00 7707.37 121.24 14575.21 00:29:47.632 =================================================================================================================== 00:29:47.632 Total : 8139.79 31.80 0.00 0.00 7822.21 121.24 15099.49 00:29:47.632 0 00:29:47.632 13:29:57 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:29:47.632 13:29:57 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:29:47.632 13:29:57 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:29:47.632 13:29:58 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:47.632 13:29:58 compress_compdev -- 
compress/compress.sh@78 -- # killprocess 1031312 00:29:47.632 13:29:58 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 1031312 ']' 00:29:47.632 13:29:58 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 1031312 00:29:47.632 13:29:58 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:29:47.632 13:29:58 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:47.632 13:29:58 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1031312 00:29:47.891 13:29:58 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:47.891 13:29:58 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:47.891 13:29:58 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1031312' 00:29:47.891 killing process with pid 1031312 00:29:47.891 13:29:58 compress_compdev -- common/autotest_common.sh@969 -- # kill 1031312 00:29:47.891 Received shutdown signal, test time was about 3.000000 seconds 00:29:47.891 00:29:47.891 Latency(us) 00:29:47.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.891 =================================================================================================================== 00:29:47.891 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:47.891 13:29:58 compress_compdev -- common/autotest_common.sh@974 -- # wait 1031312 00:29:50.425 13:30:00 compress_compdev -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:29:50.425 13:30:00 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:29:50.425 13:30:00 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1033540 00:29:50.425 13:30:00 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:50.425 13:30:00 compress_compdev -- compress/compress.sh@67 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:29:50.425 13:30:00 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1033540 00:29:50.425 13:30:00 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 1033540 ']' 00:29:50.425 13:30:00 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.425 13:30:00 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:50.425 13:30:00 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.425 13:30:00 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:50.425 13:30:00 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:29:50.425 [2024-07-25 13:30:00.670714] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:29:50.425 [2024-07-25 13:30:00.670782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033540 ] 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3d:02.3 cannot be used 
00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:50.425 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:50.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:50.425 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:50.425 [2024-07-25 13:30:00.791133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:50.425 [2024-07-25 13:30:00.878126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.425 [2024-07-25 13:30:00.878132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.361 [2024-07-25 13:30:01.563639] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:29:51.361 13:30:01 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:51.361 13:30:01 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:29:51.361 13:30:01 compress_compdev -- compress/compress.sh@74 -- # create_vols 4096 00:29:51.361 13:30:01 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:51.361 13:30:01 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:54.646 [2024-07-25 13:30:04.711556] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xe35f40 PMD being used: compress_qat 00:29:54.646 13:30:04 
compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:29:54.646 13:30:04 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:29:54.646 13:30:04 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:54.646 13:30:04 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:54.646 13:30:04 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:54.646 13:30:04 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:54.646 13:30:04 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:54.646 13:30:04 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:29:54.905 [ 00:29:54.905 { 00:29:54.905 "name": "Nvme0n1", 00:29:54.905 "aliases": [ 00:29:54.905 "245e672e-e14d-4fbc-ae06-b54cfcdc1789" 00:29:54.905 ], 00:29:54.906 "product_name": "NVMe disk", 00:29:54.906 "block_size": 512, 00:29:54.906 "num_blocks": 3907029168, 00:29:54.906 "uuid": "245e672e-e14d-4fbc-ae06-b54cfcdc1789", 00:29:54.906 "assigned_rate_limits": { 00:29:54.906 "rw_ios_per_sec": 0, 00:29:54.906 "rw_mbytes_per_sec": 0, 00:29:54.906 "r_mbytes_per_sec": 0, 00:29:54.906 "w_mbytes_per_sec": 0 00:29:54.906 }, 00:29:54.906 "claimed": false, 00:29:54.906 "zoned": false, 00:29:54.906 "supported_io_types": { 00:29:54.906 "read": true, 00:29:54.906 "write": true, 00:29:54.906 "unmap": true, 00:29:54.906 "flush": true, 00:29:54.906 "reset": true, 00:29:54.906 "nvme_admin": true, 00:29:54.906 "nvme_io": true, 00:29:54.906 "nvme_io_md": false, 00:29:54.906 "write_zeroes": true, 00:29:54.906 "zcopy": false, 00:29:54.906 "get_zone_info": false, 00:29:54.906 "zone_management": false, 00:29:54.906 "zone_append": false, 00:29:54.906 "compare": false, 00:29:54.906 "compare_and_write": false, 00:29:54.906 
"abort": true, 00:29:54.906 "seek_hole": false, 00:29:54.906 "seek_data": false, 00:29:54.906 "copy": false, 00:29:54.906 "nvme_iov_md": false 00:29:54.906 }, 00:29:54.906 "driver_specific": { 00:29:54.906 "nvme": [ 00:29:54.906 { 00:29:54.906 "pci_address": "0000:d8:00.0", 00:29:54.906 "trid": { 00:29:54.906 "trtype": "PCIe", 00:29:54.906 "traddr": "0000:d8:00.0" 00:29:54.906 }, 00:29:54.906 "ctrlr_data": { 00:29:54.906 "cntlid": 0, 00:29:54.906 "vendor_id": "0x8086", 00:29:54.906 "model_number": "INTEL SSDPE2KX020T8", 00:29:54.906 "serial_number": "BTLJ125505KA2P0BGN", 00:29:54.906 "firmware_revision": "VDV10170", 00:29:54.906 "oacs": { 00:29:54.906 "security": 0, 00:29:54.906 "format": 1, 00:29:54.906 "firmware": 1, 00:29:54.906 "ns_manage": 1 00:29:54.906 }, 00:29:54.906 "multi_ctrlr": false, 00:29:54.906 "ana_reporting": false 00:29:54.906 }, 00:29:54.906 "vs": { 00:29:54.906 "nvme_version": "1.2" 00:29:54.906 }, 00:29:54.906 "ns_data": { 00:29:54.906 "id": 1, 00:29:54.906 "can_share": false 00:29:54.906 } 00:29:54.906 } 00:29:54.906 ], 00:29:54.906 "mp_policy": "active_passive" 00:29:54.906 } 00:29:54.906 } 00:29:54.906 ] 00:29:54.906 13:30:05 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:54.906 13:30:05 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:29:55.164 [2024-07-25 13:30:05.400766] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xc6d1a0 PMD being used: compress_qat 00:29:56.126 06aa0079-3fc4-4341-a3c7-a2375dee3ffe 00:29:56.126 13:30:06 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:29:56.126 3802011a-7586-4cc3-a5d0-1e0a67ecb173 00:29:56.385 13:30:06 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:29:56.385 13:30:06 compress_compdev -- common/autotest_common.sh@899 -- 
# local bdev_name=lvs0/lv0 00:29:56.385 13:30:06 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:56.385 13:30:06 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:56.385 13:30:06 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:56.385 13:30:06 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:56.385 13:30:06 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:56.385 13:30:06 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:56.644 [ 00:29:56.644 { 00:29:56.644 "name": "3802011a-7586-4cc3-a5d0-1e0a67ecb173", 00:29:56.644 "aliases": [ 00:29:56.644 "lvs0/lv0" 00:29:56.644 ], 00:29:56.644 "product_name": "Logical Volume", 00:29:56.644 "block_size": 512, 00:29:56.644 "num_blocks": 204800, 00:29:56.644 "uuid": "3802011a-7586-4cc3-a5d0-1e0a67ecb173", 00:29:56.644 "assigned_rate_limits": { 00:29:56.644 "rw_ios_per_sec": 0, 00:29:56.644 "rw_mbytes_per_sec": 0, 00:29:56.644 "r_mbytes_per_sec": 0, 00:29:56.644 "w_mbytes_per_sec": 0 00:29:56.644 }, 00:29:56.644 "claimed": false, 00:29:56.644 "zoned": false, 00:29:56.644 "supported_io_types": { 00:29:56.644 "read": true, 00:29:56.644 "write": true, 00:29:56.644 "unmap": true, 00:29:56.644 "flush": false, 00:29:56.644 "reset": true, 00:29:56.644 "nvme_admin": false, 00:29:56.644 "nvme_io": false, 00:29:56.644 "nvme_io_md": false, 00:29:56.644 "write_zeroes": true, 00:29:56.644 "zcopy": false, 00:29:56.644 "get_zone_info": false, 00:29:56.644 "zone_management": false, 00:29:56.644 "zone_append": false, 00:29:56.644 "compare": false, 00:29:56.644 "compare_and_write": false, 00:29:56.644 "abort": false, 00:29:56.644 "seek_hole": true, 00:29:56.644 "seek_data": true, 00:29:56.644 "copy": false, 00:29:56.644 "nvme_iov_md": false 
00:29:56.644 }, 00:29:56.644 "driver_specific": { 00:29:56.644 "lvol": { 00:29:56.644 "lvol_store_uuid": "06aa0079-3fc4-4341-a3c7-a2375dee3ffe", 00:29:56.644 "base_bdev": "Nvme0n1", 00:29:56.644 "thin_provision": true, 00:29:56.644 "num_allocated_clusters": 0, 00:29:56.644 "snapshot": false, 00:29:56.644 "clone": false, 00:29:56.644 "esnap_clone": false 00:29:56.644 } 00:29:56.644 } 00:29:56.644 } 00:29:56.644 ] 00:29:56.644 13:30:07 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:56.644 13:30:07 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:29:56.644 13:30:07 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:29:56.903 [2024-07-25 13:30:07.301613] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:29:56.903 COMP_lvs0/lv0 00:29:56.903 13:30:07 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:56.903 13:30:07 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:29:56.903 13:30:07 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:56.903 13:30:07 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:56.903 13:30:07 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:56.903 13:30:07 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:56.903 13:30:07 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:57.161 13:30:07 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:57.420 [ 00:29:57.420 { 00:29:57.420 "name": "COMP_lvs0/lv0", 00:29:57.420 "aliases": [ 00:29:57.420 
"0ad7069b-4eee-5502-be4e-1ea0d5580ba0" 00:29:57.420 ], 00:29:57.420 "product_name": "compress", 00:29:57.420 "block_size": 4096, 00:29:57.420 "num_blocks": 25088, 00:29:57.420 "uuid": "0ad7069b-4eee-5502-be4e-1ea0d5580ba0", 00:29:57.420 "assigned_rate_limits": { 00:29:57.420 "rw_ios_per_sec": 0, 00:29:57.420 "rw_mbytes_per_sec": 0, 00:29:57.420 "r_mbytes_per_sec": 0, 00:29:57.420 "w_mbytes_per_sec": 0 00:29:57.420 }, 00:29:57.420 "claimed": false, 00:29:57.420 "zoned": false, 00:29:57.420 "supported_io_types": { 00:29:57.420 "read": true, 00:29:57.420 "write": true, 00:29:57.420 "unmap": false, 00:29:57.420 "flush": false, 00:29:57.420 "reset": false, 00:29:57.420 "nvme_admin": false, 00:29:57.420 "nvme_io": false, 00:29:57.420 "nvme_io_md": false, 00:29:57.420 "write_zeroes": true, 00:29:57.420 "zcopy": false, 00:29:57.420 "get_zone_info": false, 00:29:57.420 "zone_management": false, 00:29:57.420 "zone_append": false, 00:29:57.420 "compare": false, 00:29:57.420 "compare_and_write": false, 00:29:57.420 "abort": false, 00:29:57.420 "seek_hole": false, 00:29:57.420 "seek_data": false, 00:29:57.420 "copy": false, 00:29:57.420 "nvme_iov_md": false 00:29:57.420 }, 00:29:57.420 "driver_specific": { 00:29:57.420 "compress": { 00:29:57.420 "name": "COMP_lvs0/lv0", 00:29:57.420 "base_bdev_name": "3802011a-7586-4cc3-a5d0-1e0a67ecb173", 00:29:57.420 "pm_path": "/tmp/pmem/1898dc88-29d7-48b1-aef8-1c6527783692" 00:29:57.420 } 00:29:57.420 } 00:29:57.420 } 00:29:57.420 ] 00:29:57.420 13:30:07 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:57.420 13:30:07 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:57.420 [2024-07-25 13:30:07.871870] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f8af01b1600 PMD being used: compress_qat 00:29:57.420 [2024-07-25 13:30:07.874022] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xe32880 PMD being 
used: compress_qat 00:29:57.420 Running I/O for 3 seconds... 00:30:00.709 00:30:00.709 Latency(us) 00:30:00.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.709 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:30:00.709 Verification LBA range: start 0x0 length 0x3100 00:30:00.709 COMP_lvs0/lv0 : 3.01 4064.71 15.88 0.00 0.00 7818.91 173.67 12635.34 00:30:00.709 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:30:00.709 Verification LBA range: start 0x3100 length 0x3100 00:30:00.709 COMP_lvs0/lv0 : 3.01 4131.90 16.14 0.00 0.00 7700.57 167.12 12582.91 00:30:00.709 =================================================================================================================== 00:30:00.709 Total : 8196.61 32.02 0.00 0.00 7759.25 167.12 12635.34 00:30:00.709 0 00:30:00.709 13:30:10 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:30:00.709 13:30:10 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:30:00.709 13:30:11 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:30:00.968 13:30:11 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:00.968 13:30:11 compress_compdev -- compress/compress.sh@78 -- # killprocess 1033540 00:30:00.968 13:30:11 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 1033540 ']' 00:30:00.968 13:30:11 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 1033540 00:30:00.968 13:30:11 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:30:00.968 13:30:11 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:00.968 13:30:11 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1033540 00:30:00.968 13:30:11 compress_compdev 
-- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:00.968 13:30:11 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:00.968 13:30:11 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1033540' 00:30:00.968 killing process with pid 1033540 00:30:00.968 13:30:11 compress_compdev -- common/autotest_common.sh@969 -- # kill 1033540 00:30:00.968 Received shutdown signal, test time was about 3.000000 seconds 00:30:00.968 00:30:00.968 Latency(us) 00:30:00.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.968 =================================================================================================================== 00:30:00.968 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:00.968 13:30:11 compress_compdev -- common/autotest_common.sh@974 -- # wait 1033540 00:30:03.503 13:30:13 compress_compdev -- compress/compress.sh@89 -- # run_bdevio 00:30:03.503 13:30:13 compress_compdev -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:30:03.503 13:30:13 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=1036366 00:30:03.503 13:30:13 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w 00:30:03.503 13:30:13 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:03.503 13:30:13 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 1036366 00:30:03.503 13:30:13 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 1036366 ']' 00:30:03.503 13:30:13 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.503 13:30:13 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:03.503 13:30:13 compress_compdev -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.503 13:30:13 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:03.503 13:30:13 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:30:03.503 [2024-07-25 13:30:13.898237] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:30:03.503 [2024-07-25 13:30:13.898299] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036366 ] 00:30:03.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.503 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:03.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.503 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:03.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.503 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:03.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.503 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:03.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:30:03.504 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:03.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:03.504 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:03.763 [2024-07-25 13:30:14.021597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:03.763 [2024-07-25 13:30:14.109077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.763 [2024-07-25 13:30:14.109171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:03.763 [2024-07-25 13:30:14.109176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.331 [2024-07-25 13:30:14.784943] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:30:04.590 13:30:14 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:04.590 
13:30:14 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:30:04.590 13:30:14 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:30:04.590 13:30:14 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:04.590 13:30:14 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:07.876 [2024-07-25 13:30:17.944456] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x187aae0 PMD being used: compress_qat 00:30:07.876 13:30:17 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:07.876 13:30:17 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:30:07.876 13:30:17 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:07.876 13:30:17 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:30:07.876 13:30:17 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:07.876 13:30:17 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:07.877 13:30:17 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:07.877 13:30:18 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:08.135 [ 00:30:08.135 { 00:30:08.135 "name": "Nvme0n1", 00:30:08.135 "aliases": [ 00:30:08.135 "1958e4c2-f133-47a8-8b8f-bca61bcff952" 00:30:08.135 ], 00:30:08.135 "product_name": "NVMe disk", 00:30:08.135 "block_size": 512, 00:30:08.135 "num_blocks": 3907029168, 00:30:08.135 "uuid": "1958e4c2-f133-47a8-8b8f-bca61bcff952", 00:30:08.135 "assigned_rate_limits": { 00:30:08.135 "rw_ios_per_sec": 0, 00:30:08.135 "rw_mbytes_per_sec": 0, 00:30:08.135 "r_mbytes_per_sec": 0, 00:30:08.135 "w_mbytes_per_sec": 0 
00:30:08.135 }, 00:30:08.135 "claimed": false, 00:30:08.135 "zoned": false, 00:30:08.135 "supported_io_types": { 00:30:08.135 "read": true, 00:30:08.135 "write": true, 00:30:08.135 "unmap": true, 00:30:08.135 "flush": true, 00:30:08.135 "reset": true, 00:30:08.135 "nvme_admin": true, 00:30:08.135 "nvme_io": true, 00:30:08.135 "nvme_io_md": false, 00:30:08.135 "write_zeroes": true, 00:30:08.135 "zcopy": false, 00:30:08.135 "get_zone_info": false, 00:30:08.135 "zone_management": false, 00:30:08.135 "zone_append": false, 00:30:08.135 "compare": false, 00:30:08.135 "compare_and_write": false, 00:30:08.135 "abort": true, 00:30:08.135 "seek_hole": false, 00:30:08.135 "seek_data": false, 00:30:08.135 "copy": false, 00:30:08.135 "nvme_iov_md": false 00:30:08.135 }, 00:30:08.135 "driver_specific": { 00:30:08.135 "nvme": [ 00:30:08.135 { 00:30:08.135 "pci_address": "0000:d8:00.0", 00:30:08.135 "trid": { 00:30:08.135 "trtype": "PCIe", 00:30:08.135 "traddr": "0000:d8:00.0" 00:30:08.136 }, 00:30:08.136 "ctrlr_data": { 00:30:08.136 "cntlid": 0, 00:30:08.136 "vendor_id": "0x8086", 00:30:08.136 "model_number": "INTEL SSDPE2KX020T8", 00:30:08.136 "serial_number": "BTLJ125505KA2P0BGN", 00:30:08.136 "firmware_revision": "VDV10170", 00:30:08.136 "oacs": { 00:30:08.136 "security": 0, 00:30:08.136 "format": 1, 00:30:08.136 "firmware": 1, 00:30:08.136 "ns_manage": 1 00:30:08.136 }, 00:30:08.136 "multi_ctrlr": false, 00:30:08.136 "ana_reporting": false 00:30:08.136 }, 00:30:08.136 "vs": { 00:30:08.136 "nvme_version": "1.2" 00:30:08.136 }, 00:30:08.136 "ns_data": { 00:30:08.136 "id": 1, 00:30:08.136 "can_share": false 00:30:08.136 } 00:30:08.136 } 00:30:08.136 ], 00:30:08.136 "mp_policy": "active_passive" 00:30:08.136 } 00:30:08.136 } 00:30:08.136 ] 00:30:08.136 13:30:18 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:30:08.136 13:30:18 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 
--clear-method none Nvme0n1 lvs0 00:30:08.395 [2024-07-25 13:30:18.658584] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x187cfa0 PMD being used: compress_qat 00:30:09.331 b24269b9-27d9-4c56-a068-4939b4adbf39 00:30:09.331 13:30:19 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:09.590 8ec9c321-18a3-4632-8a67-8602e30454e6 00:30:09.590 13:30:19 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:09.590 13:30:19 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:30:09.590 13:30:19 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:09.590 13:30:19 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:30:09.590 13:30:19 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:09.590 13:30:19 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:09.590 13:30:19 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:09.849 13:30:20 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:10.107 [ 00:30:10.107 { 00:30:10.107 "name": "8ec9c321-18a3-4632-8a67-8602e30454e6", 00:30:10.107 "aliases": [ 00:30:10.107 "lvs0/lv0" 00:30:10.107 ], 00:30:10.107 "product_name": "Logical Volume", 00:30:10.107 "block_size": 512, 00:30:10.107 "num_blocks": 204800, 00:30:10.107 "uuid": "8ec9c321-18a3-4632-8a67-8602e30454e6", 00:30:10.107 "assigned_rate_limits": { 00:30:10.107 "rw_ios_per_sec": 0, 00:30:10.107 "rw_mbytes_per_sec": 0, 00:30:10.107 "r_mbytes_per_sec": 0, 00:30:10.107 "w_mbytes_per_sec": 0 00:30:10.107 }, 00:30:10.107 "claimed": false, 00:30:10.108 "zoned": false, 00:30:10.108 "supported_io_types": { 00:30:10.108 "read": true, 
00:30:10.108 "write": true, 00:30:10.108 "unmap": true, 00:30:10.108 "flush": false, 00:30:10.108 "reset": true, 00:30:10.108 "nvme_admin": false, 00:30:10.108 "nvme_io": false, 00:30:10.108 "nvme_io_md": false, 00:30:10.108 "write_zeroes": true, 00:30:10.108 "zcopy": false, 00:30:10.108 "get_zone_info": false, 00:30:10.108 "zone_management": false, 00:30:10.108 "zone_append": false, 00:30:10.108 "compare": false, 00:30:10.108 "compare_and_write": false, 00:30:10.108 "abort": false, 00:30:10.108 "seek_hole": true, 00:30:10.108 "seek_data": true, 00:30:10.108 "copy": false, 00:30:10.108 "nvme_iov_md": false 00:30:10.108 }, 00:30:10.108 "driver_specific": { 00:30:10.108 "lvol": { 00:30:10.108 "lvol_store_uuid": "b24269b9-27d9-4c56-a068-4939b4adbf39", 00:30:10.108 "base_bdev": "Nvme0n1", 00:30:10.108 "thin_provision": true, 00:30:10.108 "num_allocated_clusters": 0, 00:30:10.108 "snapshot": false, 00:30:10.108 "clone": false, 00:30:10.108 "esnap_clone": false 00:30:10.108 } 00:30:10.108 } 00:30:10.108 } 00:30:10.108 ] 00:30:10.108 13:30:20 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:30:10.108 13:30:20 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:30:10.108 13:30:20 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:30:10.367 [2024-07-25 13:30:20.652883] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:10.367 COMP_lvs0/lv0 00:30:10.367 13:30:20 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:10.367 13:30:20 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:30:10.367 13:30:20 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:10.367 13:30:20 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:30:10.367 13:30:20 compress_compdev -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:10.367 13:30:20 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:10.367 13:30:20 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:10.626 13:30:20 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:10.626 [ 00:30:10.626 { 00:30:10.626 "name": "COMP_lvs0/lv0", 00:30:10.626 "aliases": [ 00:30:10.626 "7f9ea9b3-c6a8-5059-8bba-26c487c55195" 00:30:10.626 ], 00:30:10.626 "product_name": "compress", 00:30:10.626 "block_size": 512, 00:30:10.626 "num_blocks": 200704, 00:30:10.626 "uuid": "7f9ea9b3-c6a8-5059-8bba-26c487c55195", 00:30:10.626 "assigned_rate_limits": { 00:30:10.626 "rw_ios_per_sec": 0, 00:30:10.626 "rw_mbytes_per_sec": 0, 00:30:10.626 "r_mbytes_per_sec": 0, 00:30:10.626 "w_mbytes_per_sec": 0 00:30:10.626 }, 00:30:10.626 "claimed": false, 00:30:10.626 "zoned": false, 00:30:10.626 "supported_io_types": { 00:30:10.626 "read": true, 00:30:10.626 "write": true, 00:30:10.626 "unmap": false, 00:30:10.626 "flush": false, 00:30:10.626 "reset": false, 00:30:10.626 "nvme_admin": false, 00:30:10.626 "nvme_io": false, 00:30:10.626 "nvme_io_md": false, 00:30:10.626 "write_zeroes": true, 00:30:10.626 "zcopy": false, 00:30:10.626 "get_zone_info": false, 00:30:10.626 "zone_management": false, 00:30:10.626 "zone_append": false, 00:30:10.626 "compare": false, 00:30:10.626 "compare_and_write": false, 00:30:10.626 "abort": false, 00:30:10.626 "seek_hole": false, 00:30:10.626 "seek_data": false, 00:30:10.626 "copy": false, 00:30:10.626 "nvme_iov_md": false 00:30:10.626 }, 00:30:10.626 "driver_specific": { 00:30:10.626 "compress": { 00:30:10.626 "name": "COMP_lvs0/lv0", 00:30:10.626 "base_bdev_name": "8ec9c321-18a3-4632-8a67-8602e30454e6", 00:30:10.626 "pm_path": 
"/tmp/pmem/f81cb051-e2a6-491d-9d57-f67a5fcab169" 00:30:10.626 } 00:30:10.626 } 00:30:10.626 } 00:30:10.626 ] 00:30:10.626 13:30:21 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:30:10.626 13:30:21 compress_compdev -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:10.884 [2024-07-25 13:30:21.201780] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f7ed01b1390 PMD being used: compress_qat 00:30:10.884 I/O targets: 00:30:10.884 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:30:10.884 00:30:10.884 00:30:10.884 CUnit - A unit testing framework for C - Version 2.1-3 00:30:10.884 http://cunit.sourceforge.net/ 00:30:10.884 00:30:10.884 00:30:10.884 Suite: bdevio tests on: COMP_lvs0/lv0 00:30:10.884 Test: blockdev write read block ...passed 00:30:10.884 Test: blockdev write zeroes read block ...passed 00:30:10.884 Test: blockdev write zeroes read no split ...passed 00:30:10.884 Test: blockdev write zeroes read split ...passed 00:30:10.884 Test: blockdev write zeroes read split partial ...passed 00:30:10.884 Test: blockdev reset ...[2024-07-25 13:30:21.258814] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:30:10.884 passed 00:30:10.884 Test: blockdev write read 8 blocks ...passed 00:30:10.884 Test: blockdev write read size > 128k ...passed 00:30:10.884 Test: blockdev write read invalid size ...passed 00:30:10.884 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:10.884 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:10.884 Test: blockdev write read max offset ...passed 00:30:10.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:10.884 Test: blockdev writev readv 8 blocks ...passed 00:30:10.885 Test: blockdev writev readv 30 x 1block ...passed 00:30:10.885 Test: blockdev writev readv block ...passed 00:30:10.885 Test: blockdev writev 
readv size > 128k ...passed 00:30:10.885 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:10.885 Test: blockdev comparev and writev ...passed 00:30:10.885 Test: blockdev nvme passthru rw ...passed 00:30:10.885 Test: blockdev nvme passthru vendor specific ...passed 00:30:10.885 Test: blockdev nvme admin passthru ...passed 00:30:10.885 Test: blockdev copy ...passed 00:30:10.885 00:30:10.885 Run Summary: Type Total Ran Passed Failed Inactive 00:30:10.885 suites 1 1 n/a 0 0 00:30:10.885 tests 23 23 23 0 0 00:30:10.885 asserts 130 130 130 0 n/a 00:30:10.885 00:30:10.885 Elapsed time = 0.145 seconds 00:30:10.885 0 00:30:10.885 13:30:21 compress_compdev -- compress/compress.sh@60 -- # destroy_vols 00:30:10.885 13:30:21 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:30:11.143 13:30:21 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:30:11.401 13:30:21 compress_compdev -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:30:11.401 13:30:21 compress_compdev -- compress/compress.sh@62 -- # killprocess 1036366 00:30:11.401 13:30:21 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 1036366 ']' 00:30:11.401 13:30:21 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 1036366 00:30:11.401 13:30:21 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:30:11.401 13:30:21 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:11.401 13:30:21 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1036366 00:30:11.401 13:30:21 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:11.401 13:30:21 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:11.401 13:30:21 compress_compdev -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 1036366' 00:30:11.401 killing process with pid 1036366 00:30:11.401 13:30:21 compress_compdev -- common/autotest_common.sh@969 -- # kill 1036366 00:30:11.401 13:30:21 compress_compdev -- common/autotest_common.sh@974 -- # wait 1036366 00:30:13.976 13:30:24 compress_compdev -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:30:13.976 13:30:24 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:30:13.976 00:30:13.976 real 0m50.283s 00:30:13.976 user 1m53.764s 00:30:13.976 sys 0m5.450s 00:30:13.976 13:30:24 compress_compdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:13.976 13:30:24 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:30:13.976 ************************************ 00:30:13.976 END TEST compress_compdev 00:30:13.976 ************************************ 00:30:13.976 13:30:24 -- spdk/autotest.sh@353 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:30:13.976 13:30:24 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:13.976 13:30:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:13.976 13:30:24 -- common/autotest_common.sh@10 -- # set +x 00:30:13.976 ************************************ 00:30:13.976 START TEST compress_isal 00:30:13.976 ************************************ 00:30:13.976 13:30:24 compress_isal -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:30:14.353 * Looking for test storage... 
00:30:14.353 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:30:14.353 13:30:24 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:30:14.353 13:30:24 compress_isal -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.353 13:30:24 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.353 13:30:24 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.353 13:30:24 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.353 13:30:24 compress_isal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.353 13:30:24 compress_isal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.353 13:30:24 compress_isal -- paths/export.sh@5 -- # export PATH 00:30:14.353 13:30:24 compress_isal -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@47 -- # : 0 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:14.353 13:30:24 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:14.353 13:30:24 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:14.353 13:30:24 compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:30:14.353 13:30:24 compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:30:14.353 13:30:24 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:30:14.353 13:30:24 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:30:14.353 13:30:24 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1038054 00:30:14.353 13:30:24 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:14.353 13:30:24 compress_isal -- 
compress/compress.sh@73 -- # waitforlisten 1038054 00:30:14.353 13:30:24 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 1038054 ']' 00:30:14.353 13:30:24 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:30:14.353 13:30:24 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.353 13:30:24 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:14.353 13:30:24 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.353 13:30:24 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:14.354 13:30:24 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:30:14.354 [2024-07-25 13:30:24.601927] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:30:14.354 [2024-07-25 13:30:24.601989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1038054 ] 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3d:02.3 cannot be used 
00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:14.354 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:14.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:14.354 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:14.354 [2024-07-25 13:30:24.722078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:14.354 [2024-07-25 13:30:24.808176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.354 [2024-07-25 13:30:24.808182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.290 13:30:25 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:15.290 13:30:25 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:30:15.290 13:30:25 compress_isal -- compress/compress.sh@74 -- # create_vols 00:30:15.290 13:30:25 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:15.290 13:30:25 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:18.576 13:30:28 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:18.576 13:30:28 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:30:18.576 13:30:28 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:18.576 13:30:28 
compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:18.576 13:30:28 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:18.576 13:30:28 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:18.576 13:30:28 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:18.577 13:30:28 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:18.577 [ 00:30:18.577 { 00:30:18.577 "name": "Nvme0n1", 00:30:18.577 "aliases": [ 00:30:18.577 "744b6f5e-c231-43d3-a641-5e6cdb1ab745" 00:30:18.577 ], 00:30:18.577 "product_name": "NVMe disk", 00:30:18.577 "block_size": 512, 00:30:18.577 "num_blocks": 3907029168, 00:30:18.577 "uuid": "744b6f5e-c231-43d3-a641-5e6cdb1ab745", 00:30:18.577 "assigned_rate_limits": { 00:30:18.577 "rw_ios_per_sec": 0, 00:30:18.577 "rw_mbytes_per_sec": 0, 00:30:18.577 "r_mbytes_per_sec": 0, 00:30:18.577 "w_mbytes_per_sec": 0 00:30:18.577 }, 00:30:18.577 "claimed": false, 00:30:18.577 "zoned": false, 00:30:18.577 "supported_io_types": { 00:30:18.577 "read": true, 00:30:18.577 "write": true, 00:30:18.577 "unmap": true, 00:30:18.577 "flush": true, 00:30:18.577 "reset": true, 00:30:18.577 "nvme_admin": true, 00:30:18.577 "nvme_io": true, 00:30:18.577 "nvme_io_md": false, 00:30:18.577 "write_zeroes": true, 00:30:18.577 "zcopy": false, 00:30:18.577 "get_zone_info": false, 00:30:18.577 "zone_management": false, 00:30:18.577 "zone_append": false, 00:30:18.577 "compare": false, 00:30:18.577 "compare_and_write": false, 00:30:18.577 "abort": true, 00:30:18.577 "seek_hole": false, 00:30:18.577 "seek_data": false, 00:30:18.577 "copy": false, 00:30:18.577 "nvme_iov_md": false 00:30:18.577 }, 00:30:18.577 "driver_specific": { 00:30:18.577 "nvme": [ 00:30:18.577 { 00:30:18.577 "pci_address": "0000:d8:00.0", 00:30:18.577 "trid": { 00:30:18.577 
"trtype": "PCIe", 00:30:18.577 "traddr": "0000:d8:00.0" 00:30:18.577 }, 00:30:18.577 "ctrlr_data": { 00:30:18.577 "cntlid": 0, 00:30:18.577 "vendor_id": "0x8086", 00:30:18.577 "model_number": "INTEL SSDPE2KX020T8", 00:30:18.577 "serial_number": "BTLJ125505KA2P0BGN", 00:30:18.577 "firmware_revision": "VDV10170", 00:30:18.577 "oacs": { 00:30:18.577 "security": 0, 00:30:18.577 "format": 1, 00:30:18.577 "firmware": 1, 00:30:18.577 "ns_manage": 1 00:30:18.577 }, 00:30:18.577 "multi_ctrlr": false, 00:30:18.577 "ana_reporting": false 00:30:18.577 }, 00:30:18.577 "vs": { 00:30:18.577 "nvme_version": "1.2" 00:30:18.577 }, 00:30:18.577 "ns_data": { 00:30:18.577 "id": 1, 00:30:18.577 "can_share": false 00:30:18.577 } 00:30:18.577 } 00:30:18.577 ], 00:30:18.577 "mp_policy": "active_passive" 00:30:18.577 } 00:30:18.577 } 00:30:18.577 ] 00:30:18.836 13:30:29 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:18.836 13:30:29 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:19.773 4c9b879c-0a67-415a-a15d-11af519f81a5 00:30:20.032 13:30:30 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:20.032 623f92aa-4d96-4aa8-87bd-42fd95ca2e5c 00:30:20.032 13:30:30 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:20.032 13:30:30 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:30:20.032 13:30:30 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:20.032 13:30:30 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:20.032 13:30:30 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:20.032 13:30:30 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:20.032 13:30:30 compress_isal -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:20.291 13:30:30 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:20.550 [ 00:30:20.550 { 00:30:20.550 "name": "623f92aa-4d96-4aa8-87bd-42fd95ca2e5c", 00:30:20.550 "aliases": [ 00:30:20.550 "lvs0/lv0" 00:30:20.550 ], 00:30:20.550 "product_name": "Logical Volume", 00:30:20.550 "block_size": 512, 00:30:20.550 "num_blocks": 204800, 00:30:20.550 "uuid": "623f92aa-4d96-4aa8-87bd-42fd95ca2e5c", 00:30:20.550 "assigned_rate_limits": { 00:30:20.550 "rw_ios_per_sec": 0, 00:30:20.550 "rw_mbytes_per_sec": 0, 00:30:20.550 "r_mbytes_per_sec": 0, 00:30:20.550 "w_mbytes_per_sec": 0 00:30:20.550 }, 00:30:20.550 "claimed": false, 00:30:20.550 "zoned": false, 00:30:20.550 "supported_io_types": { 00:30:20.550 "read": true, 00:30:20.550 "write": true, 00:30:20.550 "unmap": true, 00:30:20.550 "flush": false, 00:30:20.550 "reset": true, 00:30:20.550 "nvme_admin": false, 00:30:20.550 "nvme_io": false, 00:30:20.550 "nvme_io_md": false, 00:30:20.550 "write_zeroes": true, 00:30:20.550 "zcopy": false, 00:30:20.550 "get_zone_info": false, 00:30:20.550 "zone_management": false, 00:30:20.550 "zone_append": false, 00:30:20.550 "compare": false, 00:30:20.550 "compare_and_write": false, 00:30:20.550 "abort": false, 00:30:20.550 "seek_hole": true, 00:30:20.550 "seek_data": true, 00:30:20.550 "copy": false, 00:30:20.550 "nvme_iov_md": false 00:30:20.550 }, 00:30:20.550 "driver_specific": { 00:30:20.550 "lvol": { 00:30:20.550 "lvol_store_uuid": "4c9b879c-0a67-415a-a15d-11af519f81a5", 00:30:20.550 "base_bdev": "Nvme0n1", 00:30:20.550 "thin_provision": true, 00:30:20.550 "num_allocated_clusters": 0, 00:30:20.550 "snapshot": false, 00:30:20.550 "clone": false, 00:30:20.550 "esnap_clone": false 00:30:20.550 } 00:30:20.550 } 00:30:20.550 } 00:30:20.550 ] 00:30:20.550 13:30:30 compress_isal -- 
common/autotest_common.sh@907 -- # return 0 00:30:20.550 13:30:30 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:30:20.550 13:30:30 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:30:20.808 [2024-07-25 13:30:31.139177] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:20.808 COMP_lvs0/lv0 00:30:20.808 13:30:31 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:20.808 13:30:31 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:30:20.808 13:30:31 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:20.808 13:30:31 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:20.808 13:30:31 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:20.808 13:30:31 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:20.808 13:30:31 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:21.067 13:30:31 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:21.325 [ 00:30:21.325 { 00:30:21.325 "name": "COMP_lvs0/lv0", 00:30:21.325 "aliases": [ 00:30:21.325 "f9e2fe01-a04a-51ce-9ae9-6a9ecd5d3d35" 00:30:21.325 ], 00:30:21.325 "product_name": "compress", 00:30:21.325 "block_size": 512, 00:30:21.325 "num_blocks": 200704, 00:30:21.325 "uuid": "f9e2fe01-a04a-51ce-9ae9-6a9ecd5d3d35", 00:30:21.325 "assigned_rate_limits": { 00:30:21.325 "rw_ios_per_sec": 0, 00:30:21.325 "rw_mbytes_per_sec": 0, 00:30:21.325 "r_mbytes_per_sec": 0, 00:30:21.325 "w_mbytes_per_sec": 0 00:30:21.325 }, 00:30:21.325 "claimed": false, 00:30:21.325 "zoned": false, 00:30:21.325 "supported_io_types": { 
00:30:21.325 "read": true, 00:30:21.325 "write": true, 00:30:21.325 "unmap": false, 00:30:21.325 "flush": false, 00:30:21.325 "reset": false, 00:30:21.325 "nvme_admin": false, 00:30:21.325 "nvme_io": false, 00:30:21.325 "nvme_io_md": false, 00:30:21.325 "write_zeroes": true, 00:30:21.325 "zcopy": false, 00:30:21.325 "get_zone_info": false, 00:30:21.325 "zone_management": false, 00:30:21.325 "zone_append": false, 00:30:21.325 "compare": false, 00:30:21.325 "compare_and_write": false, 00:30:21.325 "abort": false, 00:30:21.325 "seek_hole": false, 00:30:21.325 "seek_data": false, 00:30:21.325 "copy": false, 00:30:21.325 "nvme_iov_md": false 00:30:21.325 }, 00:30:21.325 "driver_specific": { 00:30:21.325 "compress": { 00:30:21.325 "name": "COMP_lvs0/lv0", 00:30:21.326 "base_bdev_name": "623f92aa-4d96-4aa8-87bd-42fd95ca2e5c", 00:30:21.326 "pm_path": "/tmp/pmem/72e8aaa6-e41f-42ae-9e44-d84de7730797" 00:30:21.326 } 00:30:21.326 } 00:30:21.326 } 00:30:21.326 ] 00:30:21.326 13:30:31 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:21.326 13:30:31 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:21.326 Running I/O for 3 seconds... 
00:30:24.614 00:30:24.614 Latency(us) 00:30:24.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.614 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:30:24.614 Verification LBA range: start 0x0 length 0x3100 00:30:24.614 COMP_lvs0/lv0 : 3.01 3502.57 13.68 0.00 0.00 9079.18 58.98 14575.21 00:30:24.614 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:30:24.614 Verification LBA range: start 0x3100 length 0x3100 00:30:24.614 COMP_lvs0/lv0 : 3.01 3522.71 13.76 0.00 0.00 9037.17 54.89 14575.21 00:30:24.614 =================================================================================================================== 00:30:24.614 Total : 7025.28 27.44 0.00 0.00 9058.12 54.89 14575.21 00:30:24.614 0 00:30:24.614 13:30:34 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:30:24.614 13:30:34 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:30:24.614 13:30:34 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:30:24.873 13:30:35 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:24.873 13:30:35 compress_isal -- compress/compress.sh@78 -- # killprocess 1038054 00:30:24.873 13:30:35 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 1038054 ']' 00:30:24.873 13:30:35 compress_isal -- common/autotest_common.sh@954 -- # kill -0 1038054 00:30:24.873 13:30:35 compress_isal -- common/autotest_common.sh@955 -- # uname 00:30:24.873 13:30:35 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:24.873 13:30:35 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1038054 00:30:24.873 13:30:35 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:24.873 13:30:35 compress_isal 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:24.873 13:30:35 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1038054' 00:30:24.873 killing process with pid 1038054 00:30:24.873 13:30:35 compress_isal -- common/autotest_common.sh@969 -- # kill 1038054 00:30:24.873 Received shutdown signal, test time was about 3.000000 seconds 00:30:24.873 00:30:24.873 Latency(us) 00:30:24.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.873 =================================================================================================================== 00:30:24.873 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:24.873 13:30:35 compress_isal -- common/autotest_common.sh@974 -- # wait 1038054 00:30:27.450 13:30:37 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:30:27.450 13:30:37 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:30:27.450 13:30:37 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1040288 00:30:27.450 13:30:37 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:27.450 13:30:37 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:30:27.450 13:30:37 compress_isal -- compress/compress.sh@73 -- # waitforlisten 1040288 00:30:27.450 13:30:37 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 1040288 ']' 00:30:27.450 13:30:37 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.450 13:30:37 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:27.450 13:30:37 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:27.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.450 13:30:37 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:27.450 13:30:37 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:30:27.450 [2024-07-25 13:30:37.753354] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:30:27.450 [2024-07-25 13:30:37.753418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040288 ] 00:30:27.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:27.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:27.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:27.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:27.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:27.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:27.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:27.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:27.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:27.450 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:27.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:27.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:27.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:27.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:27.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:27.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:27.450 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.450 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:27.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.451 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:27.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.451 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:27.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.451 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:27.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.451 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:27.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.451 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:27.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.451 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:27.451 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:30:27.451 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:27.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.451 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:27.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.451 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:27.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.451 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:27.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.451 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:27.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.451 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:27.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.451 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:27.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.451 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:27.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:27.451 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:27.451 [2024-07-25 13:30:37.875896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:27.710 [2024-07-25 13:30:37.960128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:27.710 [2024-07-25 13:30:37.960134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.278 13:30:38 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:28.278 13:30:38 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:30:28.278 13:30:38 compress_isal -- compress/compress.sh@74 -- # create_vols 512 00:30:28.278 13:30:38 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:28.278 13:30:38 compress_isal -- compress/compress.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:31.565 13:30:41 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:31.565 13:30:41 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:30:31.565 13:30:41 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:31.565 13:30:41 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:31.565 13:30:41 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:31.565 13:30:41 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:31.565 13:30:41 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:31.565 13:30:41 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:31.823 [ 00:30:31.823 { 00:30:31.823 "name": "Nvme0n1", 00:30:31.823 "aliases": [ 00:30:31.823 "9684b971-d631-4724-86f2-7f3f96797bdc" 00:30:31.823 ], 00:30:31.823 "product_name": "NVMe disk", 00:30:31.823 "block_size": 512, 00:30:31.823 "num_blocks": 3907029168, 00:30:31.823 "uuid": "9684b971-d631-4724-86f2-7f3f96797bdc", 00:30:31.823 "assigned_rate_limits": { 00:30:31.823 "rw_ios_per_sec": 0, 00:30:31.823 "rw_mbytes_per_sec": 0, 00:30:31.823 "r_mbytes_per_sec": 0, 00:30:31.823 "w_mbytes_per_sec": 0 00:30:31.823 }, 00:30:31.823 "claimed": false, 00:30:31.823 "zoned": false, 00:30:31.823 "supported_io_types": { 00:30:31.823 "read": true, 00:30:31.823 "write": true, 00:30:31.823 "unmap": true, 00:30:31.823 "flush": true, 00:30:31.823 "reset": true, 00:30:31.823 "nvme_admin": true, 00:30:31.823 "nvme_io": true, 00:30:31.823 "nvme_io_md": false, 00:30:31.823 "write_zeroes": true, 00:30:31.823 "zcopy": false, 00:30:31.823 "get_zone_info": false, 00:30:31.823 "zone_management": false, 00:30:31.823 "zone_append": false, 
00:30:31.823 "compare": false, 00:30:31.823 "compare_and_write": false, 00:30:31.823 "abort": true, 00:30:31.823 "seek_hole": false, 00:30:31.823 "seek_data": false, 00:30:31.823 "copy": false, 00:30:31.823 "nvme_iov_md": false 00:30:31.823 }, 00:30:31.823 "driver_specific": { 00:30:31.823 "nvme": [ 00:30:31.823 { 00:30:31.823 "pci_address": "0000:d8:00.0", 00:30:31.823 "trid": { 00:30:31.823 "trtype": "PCIe", 00:30:31.823 "traddr": "0000:d8:00.0" 00:30:31.823 }, 00:30:31.823 "ctrlr_data": { 00:30:31.823 "cntlid": 0, 00:30:31.823 "vendor_id": "0x8086", 00:30:31.823 "model_number": "INTEL SSDPE2KX020T8", 00:30:31.823 "serial_number": "BTLJ125505KA2P0BGN", 00:30:31.823 "firmware_revision": "VDV10170", 00:30:31.823 "oacs": { 00:30:31.823 "security": 0, 00:30:31.823 "format": 1, 00:30:31.823 "firmware": 1, 00:30:31.823 "ns_manage": 1 00:30:31.823 }, 00:30:31.823 "multi_ctrlr": false, 00:30:31.823 "ana_reporting": false 00:30:31.823 }, 00:30:31.823 "vs": { 00:30:31.823 "nvme_version": "1.2" 00:30:31.823 }, 00:30:31.823 "ns_data": { 00:30:31.823 "id": 1, 00:30:31.823 "can_share": false 00:30:31.823 } 00:30:31.823 } 00:30:31.823 ], 00:30:31.823 "mp_policy": "active_passive" 00:30:31.823 } 00:30:31.823 } 00:30:31.823 ] 00:30:31.824 13:30:42 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:31.824 13:30:42 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:33.246 41bd6b5b-760e-4a73-9c64-06dd98f65695 00:30:33.246 13:30:43 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:33.246 6c61b180-543d-4d36-ab51-48c63aff372d 00:30:33.246 13:30:43 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:33.246 13:30:43 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:30:33.246 13:30:43 compress_isal -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:33.246 13:30:43 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:33.246 13:30:43 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:33.246 13:30:43 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:33.246 13:30:43 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:33.505 13:30:43 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:33.763 [ 00:30:33.763 { 00:30:33.763 "name": "6c61b180-543d-4d36-ab51-48c63aff372d", 00:30:33.763 "aliases": [ 00:30:33.763 "lvs0/lv0" 00:30:33.763 ], 00:30:33.763 "product_name": "Logical Volume", 00:30:33.763 "block_size": 512, 00:30:33.763 "num_blocks": 204800, 00:30:33.763 "uuid": "6c61b180-543d-4d36-ab51-48c63aff372d", 00:30:33.763 "assigned_rate_limits": { 00:30:33.763 "rw_ios_per_sec": 0, 00:30:33.763 "rw_mbytes_per_sec": 0, 00:30:33.763 "r_mbytes_per_sec": 0, 00:30:33.763 "w_mbytes_per_sec": 0 00:30:33.763 }, 00:30:33.763 "claimed": false, 00:30:33.763 "zoned": false, 00:30:33.763 "supported_io_types": { 00:30:33.763 "read": true, 00:30:33.763 "write": true, 00:30:33.763 "unmap": true, 00:30:33.763 "flush": false, 00:30:33.763 "reset": true, 00:30:33.763 "nvme_admin": false, 00:30:33.763 "nvme_io": false, 00:30:33.763 "nvme_io_md": false, 00:30:33.763 "write_zeroes": true, 00:30:33.763 "zcopy": false, 00:30:33.763 "get_zone_info": false, 00:30:33.763 "zone_management": false, 00:30:33.763 "zone_append": false, 00:30:33.763 "compare": false, 00:30:33.763 "compare_and_write": false, 00:30:33.763 "abort": false, 00:30:33.763 "seek_hole": true, 00:30:33.763 "seek_data": true, 00:30:33.763 "copy": false, 00:30:33.763 "nvme_iov_md": false 00:30:33.763 }, 00:30:33.763 "driver_specific": { 00:30:33.763 "lvol": { 00:30:33.763 
"lvol_store_uuid": "41bd6b5b-760e-4a73-9c64-06dd98f65695", 00:30:33.763 "base_bdev": "Nvme0n1", 00:30:33.763 "thin_provision": true, 00:30:33.763 "num_allocated_clusters": 0, 00:30:33.763 "snapshot": false, 00:30:33.763 "clone": false, 00:30:33.763 "esnap_clone": false 00:30:33.763 } 00:30:33.763 } 00:30:33.763 } 00:30:33.763 ] 00:30:33.763 13:30:44 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:33.763 13:30:44 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:30:33.763 13:30:44 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:30:34.022 [2024-07-25 13:30:44.385355] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:34.022 COMP_lvs0/lv0 00:30:34.022 13:30:44 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:34.022 13:30:44 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:30:34.022 13:30:44 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:34.022 13:30:44 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:34.022 13:30:44 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:34.022 13:30:44 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:34.022 13:30:44 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:34.281 13:30:44 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:34.538 [ 00:30:34.538 { 00:30:34.538 "name": "COMP_lvs0/lv0", 00:30:34.538 "aliases": [ 00:30:34.538 "18eb79c0-08fe-53b0-82ce-5c9f42c0bf17" 00:30:34.538 ], 00:30:34.538 "product_name": "compress", 00:30:34.538 "block_size": 512, 00:30:34.538 
"num_blocks": 200704, 00:30:34.538 "uuid": "18eb79c0-08fe-53b0-82ce-5c9f42c0bf17", 00:30:34.538 "assigned_rate_limits": { 00:30:34.538 "rw_ios_per_sec": 0, 00:30:34.538 "rw_mbytes_per_sec": 0, 00:30:34.538 "r_mbytes_per_sec": 0, 00:30:34.539 "w_mbytes_per_sec": 0 00:30:34.539 }, 00:30:34.539 "claimed": false, 00:30:34.539 "zoned": false, 00:30:34.539 "supported_io_types": { 00:30:34.539 "read": true, 00:30:34.539 "write": true, 00:30:34.539 "unmap": false, 00:30:34.539 "flush": false, 00:30:34.539 "reset": false, 00:30:34.539 "nvme_admin": false, 00:30:34.539 "nvme_io": false, 00:30:34.539 "nvme_io_md": false, 00:30:34.539 "write_zeroes": true, 00:30:34.539 "zcopy": false, 00:30:34.539 "get_zone_info": false, 00:30:34.539 "zone_management": false, 00:30:34.539 "zone_append": false, 00:30:34.539 "compare": false, 00:30:34.539 "compare_and_write": false, 00:30:34.539 "abort": false, 00:30:34.539 "seek_hole": false, 00:30:34.539 "seek_data": false, 00:30:34.539 "copy": false, 00:30:34.539 "nvme_iov_md": false 00:30:34.539 }, 00:30:34.539 "driver_specific": { 00:30:34.539 "compress": { 00:30:34.539 "name": "COMP_lvs0/lv0", 00:30:34.539 "base_bdev_name": "6c61b180-543d-4d36-ab51-48c63aff372d", 00:30:34.539 "pm_path": "/tmp/pmem/85c3d72c-4e63-4603-9b9e-f1a612a54333" 00:30:34.539 } 00:30:34.539 } 00:30:34.539 } 00:30:34.539 ] 00:30:34.539 13:30:44 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:34.539 13:30:44 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:34.539 Running I/O for 3 seconds... 
00:30:37.824
00:30:37.824 Latency(us)
00:30:37.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:37.824 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:30:37.824 Verification LBA range: start 0x0 length 0x3100
00:30:37.824 COMP_lvs0/lv0 : 3.01 3452.23 13.49 0.00 0.00 9203.85 59.39 14260.63
00:30:37.824 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:30:37.824 Verification LBA range: start 0x3100 length 0x3100
00:30:37.824 COMP_lvs0/lv0 : 3.01 3466.19 13.54 0.00 0.00 9182.28 55.30 14470.35
00:30:37.824 ===================================================================================================================
00:30:37.824 Total : 6918.42 27.03 0.00 0.00 9193.05 55.30 14470.35
00:30:37.824 0
00:30:37.824 13:30:47 compress_isal -- compress/compress.sh@76 -- # destroy_vols
00:30:37.824 13:30:47 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:30:38.084 13:30:48 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:30:38.084 13:30:48 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:30:38.084 13:30:48 compress_isal -- compress/compress.sh@78 -- # killprocess 1040288
00:30:38.084 13:30:48 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 1040288 ']'
00:30:38.084 13:30:48 compress_isal -- common/autotest_common.sh@954 -- # kill -0 1040288
00:30:38.084 13:30:48 compress_isal -- common/autotest_common.sh@955 -- # uname
00:30:38.084 13:30:48 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:38.084 13:30:48 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1040288
00:30:38.084 13:30:48 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:38.084 13:30:48 compress_isal
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:38.084 13:30:48 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1040288'
killing process with pid 1040288
00:30:38.084 13:30:48 compress_isal -- common/autotest_common.sh@969 -- # kill 1040288
Received shutdown signal, test time was about 3.000000 seconds
00:30:38.084
00:30:38.084 Latency(us)
00:30:38.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:38.084 ===================================================================================================================
00:30:38.084 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:38.084 13:30:48 compress_isal -- common/autotest_common.sh@974 -- # wait 1040288
00:30:40.618 13:30:50 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096
00:30:40.618 13:30:50 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]]
00:30:40.618 13:30:50 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1042579
00:30:40.618 13:30:50 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
00:30:40.618 13:30:50 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6
00:30:40.618 13:30:50 compress_isal -- compress/compress.sh@73 -- # waitforlisten 1042579
00:30:40.618 13:30:50 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 1042579 ']'
00:30:40.618 13:30:50 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:40.618 13:30:50 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:40.618 13:30:50 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:40.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:40.618 13:30:50 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable
00:30:40.618 13:30:50 compress_isal -- common/autotest_common.sh@10 -- # set +x
00:30:40.618 [2024-07-25 13:30:51.051975] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:30:40.618 [2024-07-25 13:30:51.052036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042579 ]
00:30:40.877 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:40.877 EAL: Requested device 0000:3d:01.0 cannot be used
[... identical "Reached maximum number of QAT devices" / "cannot be used" message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7 ...]
00:30:40.877 [2024-07-25 13:30:51.173304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:30:40.877 [2024-07-25 13:30:51.254131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:30:40.877 [2024-07-25 13:30:51.254137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:30:41.813 13:30:51 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:41.813 13:30:51 compress_isal -- common/autotest_common.sh@864 -- # return 0
00:30:41.813 13:30:51 compress_isal -- compress/compress.sh@74 -- # create_vols 4096
00:30:41.813 13:30:51 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:30:41.813 13:30:51 compress_isal -- compress/compress.sh@34 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:45.100 13:30:55 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:45.100 13:30:55 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:30:45.100 13:30:55 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:45.101 13:30:55 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:45.101 13:30:55 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:45.101 13:30:55 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:45.101 13:30:55 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:45.101 13:30:55 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:45.101 [ 00:30:45.101 { 00:30:45.101 "name": "Nvme0n1", 00:30:45.101 "aliases": [ 00:30:45.101 "abfc8a36-2835-4f09-b6d8-5b5d369eb441" 00:30:45.101 ], 00:30:45.101 "product_name": "NVMe disk", 00:30:45.101 "block_size": 512, 00:30:45.101 "num_blocks": 3907029168, 00:30:45.101 "uuid": "abfc8a36-2835-4f09-b6d8-5b5d369eb441", 00:30:45.101 "assigned_rate_limits": { 00:30:45.101 "rw_ios_per_sec": 0, 00:30:45.101 "rw_mbytes_per_sec": 0, 00:30:45.101 "r_mbytes_per_sec": 0, 00:30:45.101 "w_mbytes_per_sec": 0 00:30:45.101 }, 00:30:45.101 "claimed": false, 00:30:45.101 "zoned": false, 00:30:45.101 "supported_io_types": { 00:30:45.101 "read": true, 00:30:45.101 "write": true, 00:30:45.101 "unmap": true, 00:30:45.101 "flush": true, 00:30:45.101 "reset": true, 00:30:45.101 "nvme_admin": true, 00:30:45.101 "nvme_io": true, 00:30:45.101 "nvme_io_md": false, 00:30:45.101 "write_zeroes": true, 00:30:45.101 "zcopy": false, 00:30:45.101 "get_zone_info": false, 00:30:45.101 "zone_management": false, 00:30:45.101 "zone_append": false, 00:30:45.101 "compare": 
false, 00:30:45.101 "compare_and_write": false, 00:30:45.101 "abort": true, 00:30:45.101 "seek_hole": false, 00:30:45.101 "seek_data": false, 00:30:45.101 "copy": false, 00:30:45.101 "nvme_iov_md": false 00:30:45.101 }, 00:30:45.101 "driver_specific": { 00:30:45.101 "nvme": [ 00:30:45.101 { 00:30:45.101 "pci_address": "0000:d8:00.0", 00:30:45.101 "trid": { 00:30:45.101 "trtype": "PCIe", 00:30:45.101 "traddr": "0000:d8:00.0" 00:30:45.101 }, 00:30:45.101 "ctrlr_data": { 00:30:45.101 "cntlid": 0, 00:30:45.101 "vendor_id": "0x8086", 00:30:45.101 "model_number": "INTEL SSDPE2KX020T8", 00:30:45.101 "serial_number": "BTLJ125505KA2P0BGN", 00:30:45.101 "firmware_revision": "VDV10170", 00:30:45.101 "oacs": { 00:30:45.101 "security": 0, 00:30:45.101 "format": 1, 00:30:45.101 "firmware": 1, 00:30:45.101 "ns_manage": 1 00:30:45.101 }, 00:30:45.101 "multi_ctrlr": false, 00:30:45.101 "ana_reporting": false 00:30:45.101 }, 00:30:45.101 "vs": { 00:30:45.101 "nvme_version": "1.2" 00:30:45.101 }, 00:30:45.101 "ns_data": { 00:30:45.101 "id": 1, 00:30:45.101 "can_share": false 00:30:45.101 } 00:30:45.101 } 00:30:45.101 ], 00:30:45.101 "mp_policy": "active_passive" 00:30:45.101 } 00:30:45.101 } 00:30:45.101 ] 00:30:45.101 13:30:55 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:45.101 13:30:55 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:46.478 13ac9f82-e502-4ec9-b2d2-160584ce3f61 00:30:46.478 13:30:56 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:46.478 7f527367-e846-4936-ba4c-7f123677763a 00:30:46.478 13:30:56 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:46.478 13:30:56 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:30:46.478 13:30:56 compress_isal -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:46.478 13:30:56 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:46.478 13:30:56 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:46.478 13:30:56 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:46.478 13:30:56 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:46.737 13:30:57 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:46.996 [ 00:30:46.996 { 00:30:46.996 "name": "7f527367-e846-4936-ba4c-7f123677763a", 00:30:46.996 "aliases": [ 00:30:46.996 "lvs0/lv0" 00:30:46.996 ], 00:30:46.996 "product_name": "Logical Volume", 00:30:46.996 "block_size": 512, 00:30:46.996 "num_blocks": 204800, 00:30:46.996 "uuid": "7f527367-e846-4936-ba4c-7f123677763a", 00:30:46.996 "assigned_rate_limits": { 00:30:46.996 "rw_ios_per_sec": 0, 00:30:46.996 "rw_mbytes_per_sec": 0, 00:30:46.996 "r_mbytes_per_sec": 0, 00:30:46.996 "w_mbytes_per_sec": 0 00:30:46.996 }, 00:30:46.996 "claimed": false, 00:30:46.996 "zoned": false, 00:30:46.996 "supported_io_types": { 00:30:46.996 "read": true, 00:30:46.996 "write": true, 00:30:46.996 "unmap": true, 00:30:46.996 "flush": false, 00:30:46.996 "reset": true, 00:30:46.996 "nvme_admin": false, 00:30:46.996 "nvme_io": false, 00:30:46.996 "nvme_io_md": false, 00:30:46.996 "write_zeroes": true, 00:30:46.996 "zcopy": false, 00:30:46.996 "get_zone_info": false, 00:30:46.996 "zone_management": false, 00:30:46.996 "zone_append": false, 00:30:46.996 "compare": false, 00:30:46.996 "compare_and_write": false, 00:30:46.996 "abort": false, 00:30:46.996 "seek_hole": true, 00:30:46.996 "seek_data": true, 00:30:46.996 "copy": false, 00:30:46.996 "nvme_iov_md": false 00:30:46.996 }, 00:30:46.996 "driver_specific": { 00:30:46.996 "lvol": { 00:30:46.996 
"lvol_store_uuid": "13ac9f82-e502-4ec9-b2d2-160584ce3f61", 00:30:46.996 "base_bdev": "Nvme0n1", 00:30:46.996 "thin_provision": true, 00:30:46.996 "num_allocated_clusters": 0, 00:30:46.996 "snapshot": false, 00:30:46.996 "clone": false, 00:30:46.996 "esnap_clone": false 00:30:46.996 } 00:30:46.996 } 00:30:46.996 } 00:30:46.996 ] 00:30:46.996 13:30:57 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:46.996 13:30:57 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:30:46.996 13:30:57 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:30:47.256 [2024-07-25 13:30:57.576036] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:47.256 COMP_lvs0/lv0 00:30:47.256 13:30:57 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:47.256 13:30:57 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:30:47.256 13:30:57 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:47.256 13:30:57 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:47.256 13:30:57 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:47.256 13:30:57 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:47.256 13:30:57 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:47.514 13:30:57 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:47.772 [ 00:30:47.772 { 00:30:47.772 "name": "COMP_lvs0/lv0", 00:30:47.772 "aliases": [ 00:30:47.772 "8e64e66c-8b3f-5dd1-9880-aae7b939ee02" 00:30:47.772 ], 00:30:47.772 "product_name": "compress", 00:30:47.772 "block_size": 4096, 00:30:47.772 
"num_blocks": 25088, 00:30:47.772 "uuid": "8e64e66c-8b3f-5dd1-9880-aae7b939ee02", 00:30:47.772 "assigned_rate_limits": { 00:30:47.772 "rw_ios_per_sec": 0, 00:30:47.772 "rw_mbytes_per_sec": 0, 00:30:47.772 "r_mbytes_per_sec": 0, 00:30:47.772 "w_mbytes_per_sec": 0 00:30:47.772 }, 00:30:47.772 "claimed": false, 00:30:47.772 "zoned": false, 00:30:47.772 "supported_io_types": { 00:30:47.772 "read": true, 00:30:47.772 "write": true, 00:30:47.772 "unmap": false, 00:30:47.772 "flush": false, 00:30:47.772 "reset": false, 00:30:47.772 "nvme_admin": false, 00:30:47.772 "nvme_io": false, 00:30:47.772 "nvme_io_md": false, 00:30:47.772 "write_zeroes": true, 00:30:47.772 "zcopy": false, 00:30:47.772 "get_zone_info": false, 00:30:47.772 "zone_management": false, 00:30:47.772 "zone_append": false, 00:30:47.772 "compare": false, 00:30:47.772 "compare_and_write": false, 00:30:47.772 "abort": false, 00:30:47.772 "seek_hole": false, 00:30:47.772 "seek_data": false, 00:30:47.772 "copy": false, 00:30:47.772 "nvme_iov_md": false 00:30:47.772 }, 00:30:47.772 "driver_specific": { 00:30:47.772 "compress": { 00:30:47.772 "name": "COMP_lvs0/lv0", 00:30:47.772 "base_bdev_name": "7f527367-e846-4936-ba4c-7f123677763a", 00:30:47.772 "pm_path": "/tmp/pmem/358650df-5593-4964-b264-94304784f17d" 00:30:47.772 } 00:30:47.772 } 00:30:47.772 } 00:30:47.772 ] 00:30:47.772 13:30:58 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:47.772 13:30:58 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:47.772 Running I/O for 3 seconds... 
00:30:51.065
00:30:51.065 Latency(us)
00:30:51.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:51.065 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:30:51.065 Verification LBA range: start 0x0 length 0x3100
00:30:51.065 COMP_lvs0/lv0 : 3.00 3538.57 13.82 0.00 0.00 8993.80 58.98 14575.21
00:30:51.065 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:30:51.065 Verification LBA range: start 0x3100 length 0x3100
00:30:51.065 COMP_lvs0/lv0 : 3.01 3563.25 13.92 0.00 0.00 8935.21 56.52 14784.92
00:30:51.065 ===================================================================================================================
00:30:51.065 Total : 7101.82 27.74 0.00 0.00 8964.39 56.52 14784.92
00:30:51.065 0
00:30:51.065 13:31:01 compress_isal -- compress/compress.sh@76 -- # destroy_vols
00:30:51.065 13:31:01 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:30:51.065 13:31:01 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:30:51.324 13:31:01 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:30:51.324 13:31:01 compress_isal -- compress/compress.sh@78 -- # killprocess 1042579
00:30:51.324 13:31:01 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 1042579 ']'
00:30:51.324 13:31:01 compress_isal -- common/autotest_common.sh@954 -- # kill -0 1042579
00:30:51.324 13:31:01 compress_isal -- common/autotest_common.sh@955 -- # uname
00:30:51.324 13:31:01 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:51.324 13:31:01 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1042579
00:30:51.324 13:31:01 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:51.324 13:31:01 compress_isal
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:51.324 13:31:01 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1042579'
killing process with pid 1042579
00:30:51.324 13:31:01 compress_isal -- common/autotest_common.sh@969 -- # kill 1042579
Received shutdown signal, test time was about 3.000000 seconds
00:30:51.324
00:30:51.324 Latency(us)
00:30:51.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:51.324 ===================================================================================================================
00:30:51.324 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:51.324 13:31:01 compress_isal -- common/autotest_common.sh@974 -- # wait 1042579
00:30:53.920 13:31:04 compress_isal -- compress/compress.sh@89 -- # run_bdevio
00:30:53.920 13:31:04 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]]
00:30:53.920 13:31:04 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=1044715
00:30:53.920 13:31:04 compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
00:30:53.920 13:31:04 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w
00:30:53.920 13:31:04 compress_isal -- compress/compress.sh@57 -- # waitforlisten 1044715
00:30:53.920 13:31:04 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 1044715 ']'
00:30:53.920 13:31:04 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:53.920 13:31:04 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:53.920 13:31:04 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:53.920 13:31:04 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:53.920 13:31:04 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:30:53.920 [2024-07-25 13:31:04.105754] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:30:53.920 [2024-07-25 13:31:04.105829] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044715 ] 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3d:02.1 cannot be used 
00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:53.920 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.920 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:53.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.921 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:53.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.921 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:53.921 [2024-07-25 13:31:04.237808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:53.921 [2024-07-25 13:31:04.326185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.921 [2024-07-25 13:31:04.326282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:53.921 [2024-07-25 13:31:04.326287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.856 13:31:05 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:54.856 13:31:05 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:30:54.856 13:31:05 compress_isal -- compress/compress.sh@58 -- # create_vols 00:30:54.856 13:31:05 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:54.856 13:31:05 compress_isal -- 
compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:58.177 13:31:08 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:58.177 13:31:08 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:30:58.177 13:31:08 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:58.177 13:31:08 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:58.177 13:31:08 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:58.177 13:31:08 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:58.177 13:31:08 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:58.177 13:31:08 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:58.177 [ 00:30:58.177 { 00:30:58.177 "name": "Nvme0n1", 00:30:58.177 "aliases": [ 00:30:58.177 "82622841-21ab-4924-8cee-69426934793a" 00:30:58.177 ], 00:30:58.177 "product_name": "NVMe disk", 00:30:58.177 "block_size": 512, 00:30:58.177 "num_blocks": 3907029168, 00:30:58.177 "uuid": "82622841-21ab-4924-8cee-69426934793a", 00:30:58.177 "assigned_rate_limits": { 00:30:58.177 "rw_ios_per_sec": 0, 00:30:58.177 "rw_mbytes_per_sec": 0, 00:30:58.177 "r_mbytes_per_sec": 0, 00:30:58.177 "w_mbytes_per_sec": 0 00:30:58.177 }, 00:30:58.177 "claimed": false, 00:30:58.177 "zoned": false, 00:30:58.177 "supported_io_types": { 00:30:58.177 "read": true, 00:30:58.177 "write": true, 00:30:58.177 "unmap": true, 00:30:58.177 "flush": true, 00:30:58.177 "reset": true, 00:30:58.177 "nvme_admin": true, 00:30:58.177 "nvme_io": true, 00:30:58.177 "nvme_io_md": false, 00:30:58.177 "write_zeroes": true, 00:30:58.177 "zcopy": false, 00:30:58.177 "get_zone_info": false, 00:30:58.177 "zone_management": false, 00:30:58.177 "zone_append": 
false, 00:30:58.177 "compare": false, 00:30:58.177 "compare_and_write": false, 00:30:58.177 "abort": true, 00:30:58.177 "seek_hole": false, 00:30:58.177 "seek_data": false, 00:30:58.177 "copy": false, 00:30:58.177 "nvme_iov_md": false 00:30:58.177 }, 00:30:58.177 "driver_specific": { 00:30:58.177 "nvme": [ 00:30:58.177 { 00:30:58.177 "pci_address": "0000:d8:00.0", 00:30:58.177 "trid": { 00:30:58.177 "trtype": "PCIe", 00:30:58.177 "traddr": "0000:d8:00.0" 00:30:58.177 }, 00:30:58.177 "ctrlr_data": { 00:30:58.177 "cntlid": 0, 00:30:58.177 "vendor_id": "0x8086", 00:30:58.177 "model_number": "INTEL SSDPE2KX020T8", 00:30:58.177 "serial_number": "BTLJ125505KA2P0BGN", 00:30:58.177 "firmware_revision": "VDV10170", 00:30:58.177 "oacs": { 00:30:58.177 "security": 0, 00:30:58.177 "format": 1, 00:30:58.177 "firmware": 1, 00:30:58.177 "ns_manage": 1 00:30:58.177 }, 00:30:58.177 "multi_ctrlr": false, 00:30:58.177 "ana_reporting": false 00:30:58.177 }, 00:30:58.177 "vs": { 00:30:58.177 "nvme_version": "1.2" 00:30:58.177 }, 00:30:58.177 "ns_data": { 00:30:58.177 "id": 1, 00:30:58.177 "can_share": false 00:30:58.177 } 00:30:58.177 } 00:30:58.177 ], 00:30:58.177 "mp_policy": "active_passive" 00:30:58.177 } 00:30:58.177 } 00:30:58.177 ] 00:30:58.177 13:31:08 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:58.177 13:31:08 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:59.554 f54c7e88-32b0-48a3-a1c9-be7432e883f8 00:30:59.554 13:31:09 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:59.812 3032155d-fd53-419a-97db-e19575f9ee5f 00:30:59.812 13:31:10 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:59.812 13:31:10 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:30:59.812 13:31:10 
compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:59.812 13:31:10 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:59.812 13:31:10 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:59.812 13:31:10 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:59.812 13:31:10 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:00.070 13:31:10 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:31:00.070 [ 00:31:00.070 { 00:31:00.070 "name": "3032155d-fd53-419a-97db-e19575f9ee5f", 00:31:00.070 "aliases": [ 00:31:00.070 "lvs0/lv0" 00:31:00.070 ], 00:31:00.070 "product_name": "Logical Volume", 00:31:00.070 "block_size": 512, 00:31:00.070 "num_blocks": 204800, 00:31:00.070 "uuid": "3032155d-fd53-419a-97db-e19575f9ee5f", 00:31:00.070 "assigned_rate_limits": { 00:31:00.070 "rw_ios_per_sec": 0, 00:31:00.070 "rw_mbytes_per_sec": 0, 00:31:00.070 "r_mbytes_per_sec": 0, 00:31:00.070 "w_mbytes_per_sec": 0 00:31:00.070 }, 00:31:00.070 "claimed": false, 00:31:00.070 "zoned": false, 00:31:00.070 "supported_io_types": { 00:31:00.070 "read": true, 00:31:00.070 "write": true, 00:31:00.070 "unmap": true, 00:31:00.070 "flush": false, 00:31:00.070 "reset": true, 00:31:00.070 "nvme_admin": false, 00:31:00.070 "nvme_io": false, 00:31:00.070 "nvme_io_md": false, 00:31:00.070 "write_zeroes": true, 00:31:00.070 "zcopy": false, 00:31:00.070 "get_zone_info": false, 00:31:00.070 "zone_management": false, 00:31:00.070 "zone_append": false, 00:31:00.070 "compare": false, 00:31:00.070 "compare_and_write": false, 00:31:00.070 "abort": false, 00:31:00.070 "seek_hole": true, 00:31:00.070 "seek_data": true, 00:31:00.070 "copy": false, 00:31:00.070 "nvme_iov_md": false 00:31:00.070 }, 00:31:00.070 "driver_specific": { 00:31:00.070 
"lvol": { 00:31:00.070 "lvol_store_uuid": "f54c7e88-32b0-48a3-a1c9-be7432e883f8", 00:31:00.070 "base_bdev": "Nvme0n1", 00:31:00.070 "thin_provision": true, 00:31:00.070 "num_allocated_clusters": 0, 00:31:00.070 "snapshot": false, 00:31:00.070 "clone": false, 00:31:00.070 "esnap_clone": false 00:31:00.070 } 00:31:00.070 } 00:31:00.071 } 00:31:00.071 ] 00:31:00.329 13:31:10 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:31:00.329 13:31:10 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:31:00.329 13:31:10 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:31:00.329 [2024-07-25 13:31:10.744531] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:31:00.329 COMP_lvs0/lv0 00:31:00.329 13:31:10 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:31:00.329 13:31:10 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:31:00.329 13:31:10 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:00.329 13:31:10 compress_isal -- common/autotest_common.sh@901 -- # local i 00:31:00.329 13:31:10 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:00.329 13:31:10 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:00.329 13:31:10 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:00.593 13:31:11 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:31:00.851 [ 00:31:00.851 { 00:31:00.851 "name": "COMP_lvs0/lv0", 00:31:00.851 "aliases": [ 00:31:00.851 "c13aa68d-efd5-5f89-a9d6-506eaa387d53" 00:31:00.851 ], 00:31:00.851 "product_name": "compress", 00:31:00.851 "block_size": 512, 
00:31:00.851 "num_blocks": 200704, 00:31:00.851 "uuid": "c13aa68d-efd5-5f89-a9d6-506eaa387d53", 00:31:00.851 "assigned_rate_limits": { 00:31:00.851 "rw_ios_per_sec": 0, 00:31:00.851 "rw_mbytes_per_sec": 0, 00:31:00.851 "r_mbytes_per_sec": 0, 00:31:00.851 "w_mbytes_per_sec": 0 00:31:00.851 }, 00:31:00.851 "claimed": false, 00:31:00.851 "zoned": false, 00:31:00.851 "supported_io_types": { 00:31:00.851 "read": true, 00:31:00.851 "write": true, 00:31:00.851 "unmap": false, 00:31:00.851 "flush": false, 00:31:00.851 "reset": false, 00:31:00.851 "nvme_admin": false, 00:31:00.851 "nvme_io": false, 00:31:00.851 "nvme_io_md": false, 00:31:00.851 "write_zeroes": true, 00:31:00.851 "zcopy": false, 00:31:00.851 "get_zone_info": false, 00:31:00.851 "zone_management": false, 00:31:00.851 "zone_append": false, 00:31:00.851 "compare": false, 00:31:00.851 "compare_and_write": false, 00:31:00.851 "abort": false, 00:31:00.851 "seek_hole": false, 00:31:00.851 "seek_data": false, 00:31:00.851 "copy": false, 00:31:00.851 "nvme_iov_md": false 00:31:00.851 }, 00:31:00.851 "driver_specific": { 00:31:00.851 "compress": { 00:31:00.851 "name": "COMP_lvs0/lv0", 00:31:00.851 "base_bdev_name": "3032155d-fd53-419a-97db-e19575f9ee5f", 00:31:00.851 "pm_path": "/tmp/pmem/fa7a6f9d-2334-45b4-b3bc-e7b1538ed0a2" 00:31:00.851 } 00:31:00.851 } 00:31:00.851 } 00:31:00.851 ] 00:31:00.851 13:31:11 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:31:00.851 13:31:11 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:01.109 I/O targets: 00:31:01.109 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:31:01.109 00:31:01.109 00:31:01.109 CUnit - A unit testing framework for C - Version 2.1-3 00:31:01.109 http://cunit.sourceforge.net/ 00:31:01.109 00:31:01.109 00:31:01.109 Suite: bdevio tests on: COMP_lvs0/lv0 00:31:01.109 Test: blockdev write read block ...passed 00:31:01.109 Test: blockdev write zeroes read 
block ...passed 00:31:01.109 Test: blockdev write zeroes read no split ...passed 00:31:01.109 Test: blockdev write zeroes read split ...passed 00:31:01.109 Test: blockdev write zeroes read split partial ...passed 00:31:01.109 Test: blockdev reset ...[2024-07-25 13:31:11.412337] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:31:01.109 passed 00:31:01.109 Test: blockdev write read 8 blocks ...passed 00:31:01.109 Test: blockdev write read size > 128k ...passed 00:31:01.109 Test: blockdev write read invalid size ...passed 00:31:01.109 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:01.109 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:01.109 Test: blockdev write read max offset ...passed 00:31:01.110 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:01.110 Test: blockdev writev readv 8 blocks ...passed 00:31:01.110 Test: blockdev writev readv 30 x 1block ...passed 00:31:01.110 Test: blockdev writev readv block ...passed 00:31:01.110 Test: blockdev writev readv size > 128k ...passed 00:31:01.110 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:01.110 Test: blockdev comparev and writev ...passed 00:31:01.110 Test: blockdev nvme passthru rw ...passed 00:31:01.110 Test: blockdev nvme passthru vendor specific ...passed 00:31:01.110 Test: blockdev nvme admin passthru ...passed 00:31:01.110 Test: blockdev copy ...passed 00:31:01.110 00:31:01.110 Run Summary: Type Total Ran Passed Failed Inactive 00:31:01.110 suites 1 1 n/a 0 0 00:31:01.110 tests 23 23 23 0 0 00:31:01.110 asserts 130 130 130 0 n/a 00:31:01.110 00:31:01.110 Elapsed time = 0.207 seconds 00:31:01.110 0 00:31:01.110 13:31:11 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:31:01.110 13:31:11 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:31:01.368 13:31:11 
compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:31:01.626 13:31:11 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:31:01.626 13:31:11 compress_isal -- compress/compress.sh@62 -- # killprocess 1044715 00:31:01.626 13:31:11 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 1044715 ']' 00:31:01.626 13:31:11 compress_isal -- common/autotest_common.sh@954 -- # kill -0 1044715 00:31:01.626 13:31:11 compress_isal -- common/autotest_common.sh@955 -- # uname 00:31:01.626 13:31:11 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:01.626 13:31:11 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1044715 00:31:01.626 13:31:11 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:01.626 13:31:11 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:01.626 13:31:11 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1044715' 00:31:01.626 killing process with pid 1044715 00:31:01.626 13:31:11 compress_isal -- common/autotest_common.sh@969 -- # kill 1044715 00:31:01.626 13:31:11 compress_isal -- common/autotest_common.sh@974 -- # wait 1044715 00:31:04.159 13:31:14 compress_isal -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:31:04.159 13:31:14 compress_isal -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:31:04.160 00:31:04.160 real 0m49.845s 00:31:04.160 user 1m53.859s 00:31:04.160 sys 0m4.075s 00:31:04.160 13:31:14 compress_isal -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:04.160 13:31:14 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:31:04.160 ************************************ 00:31:04.160 END TEST compress_isal 00:31:04.160 ************************************ 00:31:04.160 13:31:14 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:04.160 13:31:14 -- 
spdk/autotest.sh@360 -- # '[' 1 -eq 1 ']' 00:31:04.160 13:31:14 -- spdk/autotest.sh@361 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:31:04.160 13:31:14 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:04.160 13:31:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:04.160 13:31:14 -- common/autotest_common.sh@10 -- # set +x 00:31:04.160 ************************************ 00:31:04.160 START TEST blockdev_crypto_aesni 00:31:04.160 ************************************ 00:31:04.160 13:31:14 blockdev_crypto_aesni -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:31:04.160 * Looking for test storage... 00:31:04.160 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # : 00:31:04.160 13:31:14 blockdev_crypto_aesni -- 
bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # uname -s 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@681 -- # test_type=crypto_aesni 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@682 -- # crypto_device= 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # dek= 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # env_ctx= 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == bdev ]] 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == crypto_* ]] 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1046402 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 1046402 00:31:04.160 13:31:14 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:31:04.160 13:31:14 blockdev_crypto_aesni -- common/autotest_common.sh@831 -- # '[' -z 1046402 ']' 00:31:04.160 
13:31:14 blockdev_crypto_aesni -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.160 13:31:14 blockdev_crypto_aesni -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:04.160 13:31:14 blockdev_crypto_aesni -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.160 13:31:14 blockdev_crypto_aesni -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:04.160 13:31:14 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:04.160 [2024-07-25 13:31:14.500937] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:31:04.160 [2024-07-25 13:31:14.501005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046402 ] 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:31:04.160 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 
EAL: Requested device 0000:3f:01.4 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:04.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.160 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:04.160 [2024-07-25 13:31:14.632802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.419 [2024-07-25 13:31:14.719860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.986 13:31:15 blockdev_crypto_aesni -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:04.986 13:31:15 blockdev_crypto_aesni -- common/autotest_common.sh@864 -- # return 0 
00:31:04.986 13:31:15 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:31:04.986 13:31:15 blockdev_crypto_aesni -- bdev/blockdev.sh@704 -- # setup_crypto_aesni_conf 00:31:04.986 13:31:15 blockdev_crypto_aesni -- bdev/blockdev.sh@145 -- # rpc_cmd 00:31:04.986 13:31:15 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.986 13:31:15 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:04.986 [2024-07-25 13:31:15.401980] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:31:04.986 [2024-07-25 13:31:15.410013] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:04.986 [2024-07-25 13:31:15.418031] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:05.245 [2024-07-25 13:31:15.484436] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:31:07.779 true 00:31:07.779 true 00:31:07.779 true 00:31:07.779 true 00:31:07.779 Malloc0 00:31:07.779 Malloc1 00:31:07.779 Malloc2 00:31:07.779 Malloc3 00:31:07.779 [2024-07-25 13:31:17.818612] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:31:07.779 crypto_ram 00:31:07.779 [2024-07-25 13:31:17.826633] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:31:07.779 crypto_ram2 00:31:07.779 [2024-07-25 13:31:17.834653] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:31:07.779 crypto_ram3 00:31:07.779 [2024-07-25 13:31:17.842677] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:31:07.779 crypto_ram4 00:31:07.779 13:31:17 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.779 13:31:17 blockdev_crypto_aesni -- bdev/blockdev.sh@736 -- # rpc_cmd 
bdev_wait_for_examine 00:31:07.779 13:31:17 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.779 13:31:17 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:07.779 13:31:17 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.779 13:31:17 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # cat 00:31:07.779 13:31:17 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:31:07.779 13:31:17 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.779 13:31:17 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:07.779 13:31:17 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.779 13:31:17 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:31:07.779 13:31:17 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.779 13:31:17 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:07.779 13:31:17 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.779 13:31:17 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:07.779 13:31:17 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.779 13:31:17 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:07.779 13:31:17 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.779 13:31:17 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:31:07.779 13:31:17 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:31:07.779 13:31:17 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:31:07.780 13:31:17 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.780 13:31:17 blockdev_crypto_aesni -- 
common/autotest_common.sh@10 -- # set +x 00:31:07.780 13:31:18 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.780 13:31:18 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:31:07.780 13:31:18 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r .name 00:31:07.780 13:31:18 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "732d8d70-c33d-5452-9efb-951aa993bf1e"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "732d8d70-c33d-5452-9efb-951aa993bf1e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "2e03ffbf-8458-5343-af47-b12969acca36"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2e03ffbf-8458-5343-af47-b12969acca36",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": 
true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "9bdd9a5c-dfcb-5022-a1dd-64e64d55b086"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "9bdd9a5c-dfcb-5022-a1dd-64e64d55b086",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "b7fa5bb6-18ab-5544-b85d-358c4751f5de"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": 
"b7fa5bb6-18ab-5544-b85d-358c4751f5de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:31:07.780 13:31:18 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:31:07.780 13:31:18 blockdev_crypto_aesni -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:31:07.780 13:31:18 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:31:07.780 13:31:18 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # killprocess 1046402 00:31:07.780 13:31:18 blockdev_crypto_aesni -- common/autotest_common.sh@950 -- # '[' -z 1046402 ']' 00:31:07.780 13:31:18 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # kill -0 1046402 00:31:07.780 13:31:18 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # uname 00:31:07.780 13:31:18 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:07.780 13:31:18 blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1046402 00:31:07.780 13:31:18 blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:07.780 13:31:18 
blockdev_crypto_aesni -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:07.780 13:31:18 blockdev_crypto_aesni -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1046402' 00:31:07.780 killing process with pid 1046402 00:31:07.780 13:31:18 blockdev_crypto_aesni -- common/autotest_common.sh@969 -- # kill 1046402 00:31:07.780 13:31:18 blockdev_crypto_aesni -- common/autotest_common.sh@974 -- # wait 1046402 00:31:08.348 13:31:18 blockdev_crypto_aesni -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:08.348 13:31:18 blockdev_crypto_aesni -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:31:08.348 13:31:18 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:31:08.348 13:31:18 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:08.348 13:31:18 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:08.348 ************************************ 00:31:08.348 START TEST bdev_hello_world 00:31:08.348 ************************************ 00:31:08.348 13:31:18 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:31:08.348 [2024-07-25 13:31:18.687101] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:31:08.348 [2024-07-25 13:31:18.687161] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047200 ] 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3d:02.3 cannot be used 
00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:08.348 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:08.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.348 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:08.348 [2024-07-25 13:31:18.818496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.606 [2024-07-25 13:31:18.902463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.606 [2024-07-25 13:31:18.923692] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:31:08.606 [2024-07-25 13:31:18.931731] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:08.606 [2024-07-25 13:31:18.939740] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:08.606 [2024-07-25 13:31:19.044463] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:31:11.171 [2024-07-25 13:31:21.226679] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:31:11.171 [2024-07-25 13:31:21.226740] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:11.171 [2024-07-25 13:31:21.226754] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival 00:31:11.171 [2024-07-25 13:31:21.234698] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:31:11.171 [2024-07-25 13:31:21.234716] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:11.171 [2024-07-25 13:31:21.234727] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:11.171 [2024-07-25 13:31:21.242718] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:31:11.171 [2024-07-25 13:31:21.242734] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:11.171 [2024-07-25 13:31:21.242745] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:11.171 [2024-07-25 13:31:21.250739] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:31:11.171 [2024-07-25 13:31:21.250755] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:11.171 [2024-07-25 13:31:21.250765] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:11.171 [2024-07-25 13:31:21.321850] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:11.171 [2024-07-25 13:31:21.321890] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:31:11.171 [2024-07-25 13:31:21.321906] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:11.171 [2024-07-25 13:31:21.323078] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:11.171 [2024-07-25 13:31:21.323151] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:11.171 [2024-07-25 13:31:21.323168] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:11.171 [2024-07-25 13:31:21.323210] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello 
World! 00:31:11.171 00:31:11.171 [2024-07-25 13:31:21.323228] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:11.171 00:31:11.171 real 0m3.011s 00:31:11.171 user 0m2.641s 00:31:11.171 sys 0m0.332s 00:31:11.171 13:31:21 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:11.171 13:31:21 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:31:11.171 ************************************ 00:31:11.171 END TEST bdev_hello_world 00:31:11.171 ************************************ 00:31:11.430 13:31:21 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:31:11.430 13:31:21 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:11.430 13:31:21 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:11.430 13:31:21 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:11.430 ************************************ 00:31:11.430 START TEST bdev_bounds 00:31:11.430 ************************************ 00:31:11.430 13:31:21 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:31:11.430 13:31:21 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=1047747 00:31:11.430 13:31:21 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:11.430 13:31:21 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:31:11.430 13:31:21 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 1047747' 00:31:11.430 Process bdevio pid: 1047747 00:31:11.430 13:31:21 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 1047747 00:31:11.430 13:31:21 
blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 1047747 ']' 00:31:11.430 13:31:21 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.430 13:31:21 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:11.430 13:31:21 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:11.430 13:31:21 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:11.430 13:31:21 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:31:11.430 [2024-07-25 13:31:21.780799] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:31:11.430 [2024-07-25 13:31:21.780855] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047747 ] 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:11.430 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.430 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:11.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.431 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:11.431 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:31:11.431 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:11.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.431 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:11.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.431 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:11.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.431 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:11.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.431 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:11.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.431 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:11.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.431 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:11.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.431 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:11.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.431 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:11.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.431 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:11.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.431 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:11.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.431 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:11.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:11.431 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:11.431 [2024-07-25 13:31:21.914079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:11.689 [2024-07-25 13:31:22.002523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.689 
[2024-07-25 13:31:22.002616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:11.690 [2024-07-25 13:31:22.002618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.690 [2024-07-25 13:31:22.024038] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:31:11.690 [2024-07-25 13:31:22.032059] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:11.690 [2024-07-25 13:31:22.040080] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:11.690 [2024-07-25 13:31:22.137381] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:31:14.225 [2024-07-25 13:31:24.309505] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:31:14.225 [2024-07-25 13:31:24.309579] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:14.225 [2024-07-25 13:31:24.309593] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:14.225 [2024-07-25 13:31:24.317524] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:31:14.225 [2024-07-25 13:31:24.317542] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:14.225 [2024-07-25 13:31:24.317553] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:14.225 [2024-07-25 13:31:24.325543] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:31:14.225 [2024-07-25 13:31:24.325559] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:14.225 [2024-07-25 13:31:24.325570] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:14.225 
[2024-07-25 13:31:24.333566] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:31:14.225 [2024-07-25 13:31:24.333582] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:14.225 [2024-07-25 13:31:24.333593] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:14.225 13:31:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:14.225 13:31:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:31:14.225 13:31:24 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:14.225 I/O targets: 00:31:14.225 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:31:14.225 crypto_ram2: 65536 blocks of 512 bytes (32 MiB) 00:31:14.225 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:31:14.225 crypto_ram4: 8192 blocks of 4096 bytes (32 MiB) 00:31:14.225 00:31:14.225 00:31:14.225 CUnit - A unit testing framework for C - Version 2.1-3 00:31:14.225 http://cunit.sourceforge.net/ 00:31:14.225 00:31:14.225 00:31:14.225 Suite: bdevio tests on: crypto_ram4 00:31:14.225 Test: blockdev write read block ...passed 00:31:14.225 Test: blockdev write zeroes read block ...passed 00:31:14.225 Test: blockdev write zeroes read no split ...passed 00:31:14.225 Test: blockdev write zeroes read split ...passed 00:31:14.225 Test: blockdev write zeroes read split partial ...passed 00:31:14.225 Test: blockdev reset ...passed 00:31:14.225 Test: blockdev write read 8 blocks ...passed 00:31:14.225 Test: blockdev write read size > 128k ...passed 00:31:14.225 Test: blockdev write read invalid size ...passed 00:31:14.225 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:14.225 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:14.225 Test: blockdev write 
read max offset ...passed 00:31:14.225 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:14.225 Test: blockdev writev readv 8 blocks ...passed 00:31:14.225 Test: blockdev writev readv 30 x 1block ...passed 00:31:14.225 Test: blockdev writev readv block ...passed 00:31:14.225 Test: blockdev writev readv size > 128k ...passed 00:31:14.225 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:14.225 Test: blockdev comparev and writev ...passed 00:31:14.225 Test: blockdev nvme passthru rw ...passed 00:31:14.225 Test: blockdev nvme passthru vendor specific ...passed 00:31:14.225 Test: blockdev nvme admin passthru ...passed 00:31:14.225 Test: blockdev copy ...passed 00:31:14.225 Suite: bdevio tests on: crypto_ram3 00:31:14.225 Test: blockdev write read block ...passed 00:31:14.225 Test: blockdev write zeroes read block ...passed 00:31:14.225 Test: blockdev write zeroes read no split ...passed 00:31:14.225 Test: blockdev write zeroes read split ...passed 00:31:14.225 Test: blockdev write zeroes read split partial ...passed 00:31:14.225 Test: blockdev reset ...passed 00:31:14.225 Test: blockdev write read 8 blocks ...passed 00:31:14.225 Test: blockdev write read size > 128k ...passed 00:31:14.225 Test: blockdev write read invalid size ...passed 00:31:14.225 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:14.225 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:14.225 Test: blockdev write read max offset ...passed 00:31:14.225 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:14.225 Test: blockdev writev readv 8 blocks ...passed 00:31:14.225 Test: blockdev writev readv 30 x 1block ...passed 00:31:14.225 Test: blockdev writev readv block ...passed 00:31:14.225 Test: blockdev writev readv size > 128k ...passed 00:31:14.225 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:14.225 Test: blockdev comparev and writev ...passed 
00:31:14.225 Test: blockdev nvme passthru rw ...passed 00:31:14.225 Test: blockdev nvme passthru vendor specific ...passed 00:31:14.225 Test: blockdev nvme admin passthru ...passed 00:31:14.225 Test: blockdev copy ...passed 00:31:14.225 Suite: bdevio tests on: crypto_ram2 00:31:14.225 Test: blockdev write read block ...passed 00:31:14.225 Test: blockdev write zeroes read block ...passed 00:31:14.225 Test: blockdev write zeroes read no split ...passed 00:31:14.225 Test: blockdev write zeroes read split ...passed 00:31:14.225 Test: blockdev write zeroes read split partial ...passed 00:31:14.225 Test: blockdev reset ...passed 00:31:14.225 Test: blockdev write read 8 blocks ...passed 00:31:14.225 Test: blockdev write read size > 128k ...passed 00:31:14.225 Test: blockdev write read invalid size ...passed 00:31:14.225 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:14.225 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:14.225 Test: blockdev write read max offset ...passed 00:31:14.225 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:14.225 Test: blockdev writev readv 8 blocks ...passed 00:31:14.225 Test: blockdev writev readv 30 x 1block ...passed 00:31:14.225 Test: blockdev writev readv block ...passed 00:31:14.225 Test: blockdev writev readv size > 128k ...passed 00:31:14.225 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:14.225 Test: blockdev comparev and writev ...passed 00:31:14.225 Test: blockdev nvme passthru rw ...passed 00:31:14.225 Test: blockdev nvme passthru vendor specific ...passed 00:31:14.225 Test: blockdev nvme admin passthru ...passed 00:31:14.225 Test: blockdev copy ...passed 00:31:14.225 Suite: bdevio tests on: crypto_ram 00:31:14.225 Test: blockdev write read block ...passed 00:31:14.225 Test: blockdev write zeroes read block ...passed 00:31:14.225 Test: blockdev write zeroes read no split ...passed 00:31:14.484 Test: blockdev write zeroes 
read split ...passed 00:31:14.484 Test: blockdev write zeroes read split partial ...passed 00:31:14.484 Test: blockdev reset ...passed 00:31:14.484 Test: blockdev write read 8 blocks ...passed 00:31:14.484 Test: blockdev write read size > 128k ...passed 00:31:14.484 Test: blockdev write read invalid size ...passed 00:31:14.484 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:14.484 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:14.484 Test: blockdev write read max offset ...passed 00:31:14.484 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:14.484 Test: blockdev writev readv 8 blocks ...passed 00:31:14.484 Test: blockdev writev readv 30 x 1block ...passed 00:31:14.484 Test: blockdev writev readv block ...passed 00:31:14.484 Test: blockdev writev readv size > 128k ...passed 00:31:14.484 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:14.484 Test: blockdev comparev and writev ...passed 00:31:14.484 Test: blockdev nvme passthru rw ...passed 00:31:14.484 Test: blockdev nvme passthru vendor specific ...passed 00:31:14.484 Test: blockdev nvme admin passthru ...passed 00:31:14.484 Test: blockdev copy ...passed 00:31:14.484 00:31:14.484 Run Summary: Type Total Ran Passed Failed Inactive 00:31:14.484 suites 4 4 n/a 0 0 00:31:14.484 tests 92 92 92 0 0 00:31:14.484 asserts 520 520 520 0 n/a 00:31:14.484 00:31:14.484 Elapsed time = 0.511 seconds 00:31:14.484 0 00:31:14.484 13:31:24 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 1047747 00:31:14.484 13:31:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 1047747 ']' 00:31:14.484 13:31:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 1047747 00:31:14.485 13:31:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:31:14.485 13:31:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:31:14.485 13:31:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1047747 00:31:14.485 13:31:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:14.485 13:31:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:14.485 13:31:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1047747' 00:31:14.485 killing process with pid 1047747 00:31:14.485 13:31:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@969 -- # kill 1047747 00:31:14.485 13:31:24 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@974 -- # wait 1047747 00:31:14.744 13:31:25 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:31:14.744 00:31:14.744 real 0m3.463s 00:31:14.744 user 0m9.662s 00:31:14.744 sys 0m0.537s 00:31:14.744 13:31:25 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:14.744 13:31:25 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:31:14.744 ************************************ 00:31:14.744 END TEST bdev_bounds 00:31:14.744 ************************************ 00:31:14.744 13:31:25 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:31:14.744 13:31:25 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:14.744 13:31:25 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:14.744 13:31:25 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:15.003 ************************************ 00:31:15.003 START TEST bdev_nbd 00:31:15.003 ************************************ 00:31:15.003 
13:31:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=4 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 
'crypto_ram4') 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=1048306 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 1048306 /var/tmp/spdk-nbd.sock 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 1048306 ']' 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:15.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:15.003 13:31:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:31:15.003 [2024-07-25 13:31:25.343596] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:31:15.003 [2024-07-25 13:31:25.343656] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.003 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:15.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.003 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:15.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.003 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:15.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.003 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:15.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.003 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:15.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.003 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:15.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.003 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:15.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.003 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:15.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.003 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:15.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.003 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:15.003 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:15.004 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:15.004 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:15.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:15.004 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:15.004 [2024-07-25 13:31:25.476986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.263 [2024-07-25 13:31:25.559425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.263 [2024-07-25 13:31:25.580664] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:31:15.263 [2024-07-25 13:31:25.588686] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:15.263 [2024-07-25 13:31:25.596703] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:15.263 [2024-07-25 13:31:25.701809] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:31:17.797 [2024-07-25 13:31:27.883419] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:31:17.797 [2024-07-25 13:31:27.883479] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:17.797 [2024-07-25 13:31:27.883493] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival 00:31:17.797 [2024-07-25 13:31:27.891437] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:31:17.797 [2024-07-25 13:31:27.891456] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:17.797 [2024-07-25 13:31:27.891472] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:17.797 [2024-07-25 13:31:27.899457] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:31:17.797 [2024-07-25 13:31:27.899475] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:17.797 [2024-07-25 13:31:27.899485] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:17.797 [2024-07-25 13:31:27.907477] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:31:17.797 [2024-07-25 13:31:27.907503] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:17.797 [2024-07-25 13:31:27.907514] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 
00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:17.797 
13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:17.797 1+0 records in 00:31:17.797 1+0 records out 00:31:17.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292318 s, 14.0 MB/s 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:31:17.797 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:31:18.056 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:31:18.056 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:31:18.056 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:31:18.056 13:31:28 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:18.056 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:18.056 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:18.056 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:18.056 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:18.056 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:18.056 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:18.056 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:18.057 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:18.057 1+0 records in 00:31:18.057 1+0 records out 00:31:18.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031748 s, 12.9 MB/s 00:31:18.057 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:18.057 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:18.057 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:18.057 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:18.057 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:18.057 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:18.057 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 
00:31:18.057 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:31:18.316 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:31:18.316 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:31:18.316 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:31:18.316 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:31:18.316 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:18.316 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:18.316 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:18.316 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:31:18.316 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:18.316 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:18.316 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:18.316 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:18.316 1+0 records in 00:31:18.316 1+0 records out 00:31:18.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293057 s, 14.0 MB/s 00:31:18.316 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:18.316 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:18.316 13:31:28 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:18.574 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:18.574 13:31:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:18.574 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:18.574 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:31:18.574 13:31:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 00:31:18.575 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:31:18.575 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:31:18.575 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:31:18.575 13:31:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:31:18.575 13:31:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:18.575 13:31:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:18.575 13:31:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:18.575 13:31:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:31:18.575 13:31:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:18.575 13:31:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:18.575 13:31:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:18.575 13:31:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:18.833 1+0 records in 00:31:18.833 1+0 records out 00:31:18.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400948 s, 10.2 MB/s 00:31:18.833 13:31:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:18.833 13:31:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:18.833 13:31:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:18.833 13:31:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:18.833 13:31:29 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:18.833 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:18.833 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:31:18.833 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:18.833 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:18.833 { 00:31:18.833 "nbd_device": "/dev/nbd0", 00:31:18.833 "bdev_name": "crypto_ram" 00:31:18.833 }, 00:31:18.833 { 00:31:18.833 "nbd_device": "/dev/nbd1", 00:31:18.833 "bdev_name": "crypto_ram2" 00:31:18.833 }, 00:31:18.833 { 00:31:18.833 "nbd_device": "/dev/nbd2", 00:31:18.833 "bdev_name": "crypto_ram3" 00:31:18.833 }, 00:31:18.833 { 00:31:18.833 "nbd_device": "/dev/nbd3", 00:31:18.833 "bdev_name": "crypto_ram4" 00:31:18.833 } 00:31:18.833 ]' 00:31:18.833 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:18.833 13:31:29 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:18.833 { 00:31:18.833 "nbd_device": "/dev/nbd0", 00:31:18.833 "bdev_name": "crypto_ram" 00:31:18.833 }, 00:31:18.833 { 00:31:18.833 "nbd_device": "/dev/nbd1", 00:31:18.833 "bdev_name": "crypto_ram2" 00:31:18.833 }, 00:31:18.833 { 00:31:18.833 "nbd_device": "/dev/nbd2", 00:31:18.833 "bdev_name": "crypto_ram3" 00:31:18.833 }, 00:31:18.833 { 00:31:18.833 "nbd_device": "/dev/nbd3", 00:31:18.833 "bdev_name": "crypto_ram4" 00:31:18.833 } 00:31:18.833 ]' 00:31:18.833 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:19.092 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:31:19.092 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:19.092 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:31:19.092 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:19.092 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:19.092 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:19.092 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:19.350 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:19.350 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:19.350 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:19.350 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:19.350 13:31:29 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:19.350 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:19.350 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:19.350 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:19.350 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:19.350 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:19.351 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:19.351 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:19.351 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:19.351 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:19.351 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:19.351 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:19.609 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:19.609 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:19.609 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:19.609 13:31:29 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:31:19.609 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:31:19.609 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:31:19.609 
13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:31:19.609 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:19.609 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:19.609 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:31:19.868 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:19.868 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:19.868 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:19.868 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:31:19.868 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:31:19.868 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:31:19.868 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:31:19.868 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:19.868 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:19.868 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:31:19.868 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:19.868 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:19.868 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:19.868 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:19.868 13:31:30 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:20.126 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:20.126 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:20.126 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:20.126 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:20.126 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:31:20.126 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11') 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:31:20.385 /dev/nbd0 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:20.385 13:31:30 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:20.385 13:31:30 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:20.644 1+0 records in 00:31:20.644 1+0 records out 00:31:20.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281566 s, 14.5 MB/s 00:31:20.644 13:31:30 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:20.644 13:31:30 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:20.644 13:31:30 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:20.644 13:31:30 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:20.644 13:31:30 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:20.644 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:20.644 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:31:20.645 13:31:30 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:31:20.645 /dev/nbd1 00:31:20.645 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:31:20.645 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:20.645 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:20.645 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:20.645 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:20.906 1+0 records in 00:31:20.906 1+0 records out 00:31:20.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313161 s, 13.1 MB/s 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:31:20.906 /dev/nbd10 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:20.906 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:21.165 1+0 records in 00:31:21.165 1+0 records out 00:31:21.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311221 s, 13.2 MB/s 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:31:21.165 /dev/nbd11 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:21.165 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:31:21.424 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:21.424 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:21.424 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:21.424 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:21.424 1+0 records in 00:31:21.424 1+0 records out 00:31:21.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353636 s, 11.6 MB/s 00:31:21.424 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:21.424 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:21.424 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:21.424 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:21.424 13:31:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:21.424 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:21.424 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:31:21.424 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:21.424 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:21.424 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:21.424 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:21.424 { 00:31:21.424 "nbd_device": "/dev/nbd0", 00:31:21.424 "bdev_name": "crypto_ram" 00:31:21.424 }, 00:31:21.424 { 00:31:21.424 "nbd_device": "/dev/nbd1", 00:31:21.424 "bdev_name": "crypto_ram2" 00:31:21.424 }, 00:31:21.424 { 00:31:21.424 "nbd_device": "/dev/nbd10", 00:31:21.424 "bdev_name": "crypto_ram3" 00:31:21.425 }, 00:31:21.425 { 00:31:21.425 "nbd_device": "/dev/nbd11", 
00:31:21.425 "bdev_name": "crypto_ram4" 00:31:21.425 } 00:31:21.425 ]' 00:31:21.425 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:21.425 { 00:31:21.425 "nbd_device": "/dev/nbd0", 00:31:21.425 "bdev_name": "crypto_ram" 00:31:21.425 }, 00:31:21.425 { 00:31:21.425 "nbd_device": "/dev/nbd1", 00:31:21.425 "bdev_name": "crypto_ram2" 00:31:21.425 }, 00:31:21.425 { 00:31:21.425 "nbd_device": "/dev/nbd10", 00:31:21.425 "bdev_name": "crypto_ram3" 00:31:21.425 }, 00:31:21.425 { 00:31:21.425 "nbd_device": "/dev/nbd11", 00:31:21.425 "bdev_name": "crypto_ram4" 00:31:21.425 } 00:31:21.425 ]' 00:31:21.425 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:21.684 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:21.684 /dev/nbd1 00:31:21.684 /dev/nbd10 00:31:21.684 /dev/nbd11' 00:31:21.684 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:21.684 /dev/nbd1 00:31:21.684 /dev/nbd10 00:31:21.684 /dev/nbd11' 00:31:21.684 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:21.684 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:31:21.684 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:31:21.684 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:31:21.684 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:31:21.684 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:31:21.684 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:31:21.684 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:21.684 13:31:31 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:21.684 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:31:21.684 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:21.684 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:21.684 256+0 records in 00:31:21.684 256+0 records out 00:31:21.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110772 s, 94.7 MB/s 00:31:21.684 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:21.684 13:31:31 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:21.684 256+0 records in 00:31:21.684 256+0 records out 00:31:21.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0559811 s, 18.7 MB/s 00:31:21.684 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:21.684 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:21.684 256+0 records in 00:31:21.684 256+0 records out 00:31:21.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0414702 s, 25.3 MB/s 00:31:21.684 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:21.684 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:31:21.684 256+0 records in 00:31:21.684 256+0 records out 00:31:21.684 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0548657 s, 19.1 MB/s 00:31:21.684 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:21.684 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:31:21.944 256+0 records in 00:31:21.944 256+0 records out 00:31:21.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0436668 s, 24.0 MB/s 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:21.944 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:22.204 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:22.204 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:22.204 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:22.204 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:31:22.204 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:22.204 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:22.204 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:22.204 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:22.204 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:22.204 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:22.462 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:22.462 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:22.462 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:22.462 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:22.463 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:22.463 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:22.463 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:22.463 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:22.463 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:22.463 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:31:22.721 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:31:22.721 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd10 00:31:22.721 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:31:22.721 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:22.721 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:22.721 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:31:22.721 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:22.721 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:22.721 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:22.721 13:31:32 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:31:22.721 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:31:22.721 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:31:22.721 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:31:22.721 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:22.721 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:22.721 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:31:22.721 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:22.721 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:22.721 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:22.979 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:22.979 13:31:33 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:22.979 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:22.979 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:22.979 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:23.236 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:23.236 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:31:23.236 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:23.236 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:31:23.236 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:31:23.236 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:31:23.236 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:31:23.236 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:23.236 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:31:23.236 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:31:23.236 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:23.236 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:31:23.236 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:31:23.236 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:31:23.236 13:31:33 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:23.236 malloc_lvol_verify 00:31:23.494 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:23.494 9121cd41-3687-4834-825c-be151d79a180 00:31:23.494 13:31:33 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:23.753 a4ef3012-5927-4e6d-99cd-2bb242f1cf6f 00:31:23.753 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:24.012 /dev/nbd0 00:31:24.012 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:31:24.012 mke2fs 1.46.5 (30-Dec-2021) 00:31:24.012 Discarding device blocks: 0/4096 done 00:31:24.012 Creating filesystem with 4096 1k blocks and 1024 inodes 00:31:24.012 00:31:24.012 Allocating group tables: 0/1 done 00:31:24.012 Writing inode tables: 0/1 done 00:31:24.012 Creating journal (1024 blocks): done 00:31:24.012 Writing superblocks and filesystem accounting information: 0/1 done 00:31:24.012 00:31:24.012 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:31:24.012 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:24.012 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:24.012 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:24.012 13:31:34 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:24.012 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:24.012 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:24.012 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:24.271 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:24.271 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:24.271 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 1048306 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 1048306 ']' 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 1048306 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1048306 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1048306' 00:31:24.272 killing process with pid 1048306 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@969 -- # kill 1048306 00:31:24.272 13:31:34 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@974 -- # wait 1048306 00:31:24.840 13:31:35 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:31:24.840 00:31:24.840 real 0m9.782s 00:31:24.840 user 0m12.712s 00:31:24.840 sys 0m3.877s 00:31:24.840 13:31:35 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:24.840 13:31:35 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:31:24.840 ************************************ 00:31:24.840 END TEST bdev_nbd 00:31:24.840 ************************************ 00:31:24.840 13:31:35 blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:31:24.840 13:31:35 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = nvme ']' 00:31:24.840 13:31:35 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = gpt ']' 00:31:24.840 13:31:35 blockdev_crypto_aesni -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:31:24.840 13:31:35 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:24.840 13:31:35 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:24.840 13:31:35 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:24.840 
************************************ 00:31:24.840 START TEST bdev_fio 00:31:24.840 ************************************ 00:31:24.840 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:31:24.840 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:31:24.840 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:31:24.840 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:31:24.840 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:31:24.840 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:31:24.840 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:31:24.840 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:31:24.840 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 
00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram4]' 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram4 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:31:24.841 ************************************ 00:31:24.841 START TEST bdev_fio_rw_verify 00:31:24.841 ************************************ 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 
00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:24.841 13:31:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:25.492 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:25.492 
job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:25.492 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:25.492 job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:25.492 fio-3.35 00:31:25.492 Starting 4 threads 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3d:02.2 cannot be used 
00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:25.492 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:25.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:25.492 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:40.374 00:31:40.374 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1050747: Thu Jul 25 13:31:48 2024 00:31:40.374 read: IOPS=24.1k, BW=94.1MiB/s (98.7MB/s)(941MiB/10001msec) 00:31:40.374 slat (usec): min=15, max=1464, avg=54.48, stdev=28.55 00:31:40.374 clat (usec): min=12, max=2449, avg=301.87, stdev=186.97 00:31:40.374 lat (usec): min=40, max=2569, avg=356.35, stdev=204.14 00:31:40.374 clat percentiles (usec): 00:31:40.374 | 50.000th=[ 258], 99.000th=[ 914], 99.900th=[ 1074], 99.990th=[ 1156], 00:31:40.374 | 99.999th=[ 2212] 00:31:40.374 write: IOPS=26.6k, BW=104MiB/s (109MB/s)(1011MiB/9719msec); 0 zone resets 00:31:40.374 slat (usec): min=21, max=441, avg=65.49, stdev=27.98 00:31:40.374 clat (usec): min=32, max=1942, avg=363.14, stdev=218.80 00:31:40.374 lat (usec): min=74, max=2094, avg=428.63, stdev=235.47 00:31:40.374 clat percentiles (usec): 00:31:40.374 | 50.000th=[ 322], 99.000th=[ 1090], 99.900th=[ 1336], 99.990th=[ 1450], 00:31:40.374 | 99.999th=[ 1762] 00:31:40.374 bw ( KiB/s): min=79440, 
max=138952, per=97.49%, avg=103881.68, stdev=4841.97, samples=76 00:31:40.374 iops : min=19860, max=34738, avg=25970.42, stdev=1210.49, samples=76 00:31:40.374 lat (usec) : 20=0.01%, 50=0.01%, 100=6.56%, 250=34.66%, 500=41.40% 00:31:40.374 lat (usec) : 750=12.29%, 1000=4.06% 00:31:40.374 lat (msec) : 2=1.02%, 4=0.01% 00:31:40.374 cpu : usr=99.63%, sys=0.00%, ctx=78, majf=0, minf=293 00:31:40.374 IO depths : 1=10.1%, 2=25.6%, 4=51.2%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.374 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.374 issued rwts: total=240967,258897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.374 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:40.374 00:31:40.374 Run status group 0 (all jobs): 00:31:40.374 READ: bw=94.1MiB/s (98.7MB/s), 94.1MiB/s-94.1MiB/s (98.7MB/s-98.7MB/s), io=941MiB (987MB), run=10001-10001msec 00:31:40.374 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=1011MiB (1060MB), run=9719-9719msec 00:31:40.374 00:31:40.374 real 0m13.495s 00:31:40.374 user 0m53.280s 00:31:40.374 sys 0m0.514s 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:31:40.374 ************************************ 00:31:40.374 END TEST bdev_fio_rw_verify 00:31:40.374 ************************************ 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 
00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "732d8d70-c33d-5452-9efb-951aa993bf1e"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' 
"num_blocks": 65536,' ' "uuid": "732d8d70-c33d-5452-9efb-951aa993bf1e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "2e03ffbf-8458-5343-af47-b12969acca36"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2e03ffbf-8458-5343-af47-b12969acca36",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' 
' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "9bdd9a5c-dfcb-5022-a1dd-64e64d55b086"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "9bdd9a5c-dfcb-5022-a1dd-64e64d55b086",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "b7fa5bb6-18ab-5544-b85d-358c4751f5de"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "b7fa5bb6-18ab-5544-b85d-358c4751f5de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' 
"abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:31:40.374 crypto_ram2 00:31:40.374 crypto_ram3 00:31:40.374 crypto_ram4 ]] 00:31:40.374 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "732d8d70-c33d-5452-9efb-951aa993bf1e"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "732d8d70-c33d-5452-9efb-951aa993bf1e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "2e03ffbf-8458-5343-af47-b12969acca36"' ' 
],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2e03ffbf-8458-5343-af47-b12969acca36",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "9bdd9a5c-dfcb-5022-a1dd-64e64d55b086"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "9bdd9a5c-dfcb-5022-a1dd-64e64d55b086",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' 
"dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "b7fa5bb6-18ab-5544-b85d-358c4751f5de"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "b7fa5bb6-18ab-5544-b85d-358c4751f5de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 
00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram2 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram4]' 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram4 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:31:40.375 ************************************ 00:31:40.375 START TEST bdev_fio_trim 00:31:40.375 ************************************ 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1345 -- # grep libasan 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:40.375 13:31:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:40.375 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:40.375 
job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:40.375 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:40.375 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:40.375 fio-3.35 00:31:40.375 Starting 4 threads 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3d:02.2 cannot be used 
00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:40.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.375 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:40.376 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.376 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:40.376 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.376 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:40.376 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.376 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:40.376 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.376 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:40.376 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.376 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:40.376 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.376 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:40.376 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.376 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:40.376 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.376 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:40.376 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.376 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:40.376 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.376 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:40.376 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.376 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:40.376 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.376 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:40.376 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:40.376 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:52.588 00:31:52.588 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1053194: Thu Jul 25 13:32:02 2024 00:31:52.588 write: IOPS=41.6k, BW=163MiB/s (170MB/s)(1626MiB/10001msec); 0 zone resets 00:31:52.588 slat (usec): min=16, max=429, avg=54.63, stdev=32.53 00:31:52.588 clat (usec): min=47, max=1895, avg=245.59, stdev=160.06 00:31:52.588 lat (usec): min=70, max=2003, avg=300.21, stdev=181.14 00:31:52.588 clat percentiles (usec): 00:31:52.588 | 50.000th=[ 202], 99.000th=[ 816], 99.900th=[ 979], 99.990th=[ 1074], 00:31:52.588 | 99.999th=[ 1500] 00:31:52.588 bw ( KiB/s): min=148424, max=215272, per=100.00%, avg=167497.26, stdev=7189.03, samples=76 00:31:52.588 iops : min=37106, max=53818, avg=41874.32, stdev=1797.26, samples=76 00:31:52.588 trim: IOPS=41.6k, BW=163MiB/s (170MB/s)(1626MiB/10001msec); 0 zone resets 00:31:52.588 slat (usec): min=6, max=1210, avg=14.81, stdev= 6.32 00:31:52.588 clat (usec): min=59, max=1684, avg=231.63, stdev=106.61 00:31:52.588 lat (usec): min=70, max=1697, avg=246.44, 
stdev=108.67 00:31:52.588 clat percentiles (usec): 00:31:52.588 | 50.000th=[ 215], 99.000th=[ 578], 99.900th=[ 685], 99.990th=[ 750], 00:31:52.588 | 99.999th=[ 1074] 00:31:52.588 bw ( KiB/s): min=148464, max=215272, per=100.00%, avg=167498.53, stdev=7189.43, samples=76 00:31:52.588 iops : min=37118, max=53818, avg=41874.63, stdev=1797.36, samples=76 00:31:52.589 lat (usec) : 50=0.01%, 100=9.35%, 250=55.17%, 500=30.23%, 750=4.37% 00:31:52.589 lat (usec) : 1000=0.85% 00:31:52.589 lat (msec) : 2=0.03% 00:31:52.589 cpu : usr=99.63%, sys=0.00%, ctx=86, majf=0, minf=95 00:31:52.589 IO depths : 1=8.0%, 2=26.3%, 4=52.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.589 complete : 0=0.0%, 4=88.4%, 8=11.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.589 issued rwts: total=0,416272,416272,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.589 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:52.589 00:31:52.589 Run status group 0 (all jobs): 00:31:52.589 WRITE: bw=163MiB/s (170MB/s), 163MiB/s-163MiB/s (170MB/s-170MB/s), io=1626MiB (1705MB), run=10001-10001msec 00:31:52.589 TRIM: bw=163MiB/s (170MB/s), 163MiB/s-163MiB/s (170MB/s-170MB/s), io=1626MiB (1705MB), run=10001-10001msec 00:31:52.589 00:31:52.589 real 0m13.463s 00:31:52.589 user 0m54.777s 00:31:52.589 sys 0m0.488s 00:31:52.589 13:32:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:52.589 13:32:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:31:52.589 ************************************ 00:31:52.589 END TEST bdev_fio_trim 00:31:52.589 ************************************ 00:31:52.589 13:32:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:31:52.589 13:32:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:52.589 
13:32:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:31:52.589 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:31:52.589 13:32:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:31:52.589 00:31:52.589 real 0m27.297s 00:31:52.589 user 1m48.211s 00:31:52.589 sys 0m1.208s 00:31:52.589 13:32:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:52.589 13:32:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:31:52.589 ************************************ 00:31:52.589 END TEST bdev_fio 00:31:52.589 ************************************ 00:31:52.589 13:32:02 blockdev_crypto_aesni -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:52.589 13:32:02 blockdev_crypto_aesni -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:52.589 13:32:02 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:31:52.589 13:32:02 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:52.589 13:32:02 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:52.589 ************************************ 00:31:52.589 START TEST bdev_verify 00:31:52.589 ************************************ 00:31:52.589 13:32:02 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:52.589 [2024-07-25 13:32:02.563589] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:31:52.589 [2024-07-25 13:32:02.563629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054869 ] 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3d:02.3 cannot be used 
00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:52.589 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:52.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:52.589 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:52.589 [2024-07-25 13:32:02.681252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:52.589 [2024-07-25 13:32:02.772162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:52.589 [2024-07-25 13:32:02.772168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.589 [2024-07-25 13:32:02.793487] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:31:52.589 [2024-07-25 13:32:02.801516] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:52.589 [2024-07-25 13:32:02.809537] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:52.589 [2024-07-25 13:32:02.916173] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:31:55.126 [2024-07-25 13:32:05.086533] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:31:55.126 [2024-07-25 13:32:05.086618] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:55.126 
[2024-07-25 13:32:05.086632] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:55.126 [2024-07-25 13:32:05.094548] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:31:55.126 [2024-07-25 13:32:05.094566] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:55.126 [2024-07-25 13:32:05.094576] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:55.126 [2024-07-25 13:32:05.102568] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:31:55.126 [2024-07-25 13:32:05.102585] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:55.126 [2024-07-25 13:32:05.102595] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:55.126 [2024-07-25 13:32:05.110593] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:31:55.126 [2024-07-25 13:32:05.110609] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:55.126 [2024-07-25 13:32:05.110619] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:55.126 Running I/O for 5 seconds... 
00:32:00.400 00:32:00.400 Latency(us) 00:32:00.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:00.400 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:00.400 Verification LBA range: start 0x0 length 0x1000 00:32:00.400 crypto_ram : 5.07 530.36 2.07 0.00 0.00 240782.45 4377.80 156028.11 00:32:00.400 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:00.400 Verification LBA range: start 0x1000 length 0x1000 00:32:00.400 crypto_ram : 5.07 530.69 2.07 0.00 0.00 240571.14 11219.76 155189.25 00:32:00.400 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:00.400 Verification LBA range: start 0x0 length 0x1000 00:32:00.400 crypto_ram2 : 5.07 530.26 2.07 0.00 0.00 240076.80 4456.45 147639.50 00:32:00.400 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:00.400 Verification LBA range: start 0x1000 length 0x1000 00:32:00.400 crypto_ram2 : 5.07 530.59 2.07 0.00 0.00 239864.57 8808.04 146800.64 00:32:00.400 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:00.400 Verification LBA range: start 0x0 length 0x1000 00:32:00.400 crypto_ram3 : 5.06 4124.39 16.11 0.00 0.00 30754.46 4902.09 27682.41 00:32:00.400 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:00.400 Verification LBA range: start 0x1000 length 0x1000 00:32:00.400 crypto_ram3 : 5.05 4153.37 16.22 0.00 0.00 30540.08 7077.89 27472.69 00:32:00.400 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:00.400 Verification LBA range: start 0x0 length 0x1000 00:32:00.400 crypto_ram4 : 5.06 4124.93 16.11 0.00 0.00 30660.05 4980.74 24222.11 00:32:00.400 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:00.400 Verification LBA range: start 0x1000 length 0x1000 00:32:00.400 crypto_ram4 : 5.06 4161.43 16.26 0.00 0.00 30394.75 697.96 
23802.68 00:32:00.400 =================================================================================================================== 00:32:00.400 Total : 18686.03 72.99 0.00 0.00 54446.69 697.96 156028.11 00:32:00.400 00:32:00.400 real 0m8.108s 00:32:00.400 user 0m15.455s 00:32:00.400 sys 0m0.319s 00:32:00.400 13:32:10 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:00.400 13:32:10 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:32:00.400 ************************************ 00:32:00.400 END TEST bdev_verify 00:32:00.400 ************************************ 00:32:00.400 13:32:10 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:00.400 13:32:10 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:32:00.400 13:32:10 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:00.400 13:32:10 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:00.400 ************************************ 00:32:00.400 START TEST bdev_verify_big_io 00:32:00.400 ************************************ 00:32:00.400 13:32:10 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:00.400 [2024-07-25 13:32:10.758002] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:32:00.400 [2024-07-25 13:32:10.758053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056201 ] 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3d:02.3 cannot be used 
00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:00.400 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:00.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.400 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:00.659 [2024-07-25 13:32:10.889042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:00.659 [2024-07-25 13:32:10.973807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.659 [2024-07-25 13:32:10.973812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.659 [2024-07-25 13:32:10.995273] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:32:00.659 [2024-07-25 13:32:11.003303] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:32:00.659 [2024-07-25 13:32:11.011323] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:32:00.659 [2024-07-25 13:32:11.108441] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:32:03.266 [2024-07-25 13:32:13.276573] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:32:03.266 [2024-07-25 13:32:13.276647] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:03.266 
[2024-07-25 13:32:13.276660] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:03.266 [2024-07-25 13:32:13.284589] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:32:03.266 [2024-07-25 13:32:13.284607] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:03.266 [2024-07-25 13:32:13.284618] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:03.266 [2024-07-25 13:32:13.292612] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:32:03.266 [2024-07-25 13:32:13.292628] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:03.266 [2024-07-25 13:32:13.292638] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:03.266 [2024-07-25 13:32:13.300634] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:32:03.266 [2024-07-25 13:32:13.300650] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:03.266 [2024-07-25 13:32:13.300661] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:03.266 Running I/O for 5 seconds... 00:32:05.803 [2024-07-25 13:32:15.971068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.803 [2024-07-25 13:32:15.972450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.803 [2024-07-25 13:32:15.974050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.803 [2024-07-25 13:32:15.975670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:05.803 [2024-07-25 13:32:15.977920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... same *ERROR* line repeated for each allocation attempt, timestamps 13:32:15.979282 through 13:32:16.228501 ...]
00:32:05.806 [2024-07-25 13:32:16.228943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.229853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.229901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.229940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.229979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.230411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.230551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.230594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.230632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.230670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.230923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.231966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.232014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:05.806 [2024-07-25 13:32:16.232052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.232089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.232417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.232555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.232598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.232637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.232675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.233008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.233981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.234030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.234074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.234113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.234526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:05.806 [2024-07-25 13:32:16.234672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.234718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.234781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.234833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.235149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.236334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.236382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.236421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.236460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.236743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.236886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.236929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.236967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:05.806 [2024-07-25 13:32:16.237005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.237323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.238246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.238295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.238335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.238384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.238622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.238769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.238825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.238867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.238905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.239149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.239994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:05.806 [2024-07-25 13:32:16.240041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.240080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.240118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.240437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.240588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.240630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.240669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.240707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.241007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.241880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.241931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.241969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.242007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:05.806 [2024-07-25 13:32:16.242273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.242412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.242462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.242501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.242551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.242788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.243669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.243717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.243756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.243793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.806 [2024-07-25 13:32:16.244093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.244238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.244281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:05.807 [2024-07-25 13:32:16.244319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.244357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.244656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.245487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.245549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.245588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.245634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.245871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.246010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.246052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.246089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.246127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.246410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:05.807 [2024-07-25 13:32:16.247357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.247404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.247454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.247504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.247742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.247880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.247944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.247982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.248020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:05.807 [2024-07-25 13:32:16.248267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.067 [2024-07-25 13:32:16.395839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.067 [2024-07-25 13:32:16.397191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.067 [2024-07-25 13:32:16.398545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.067 [2024-07-25 13:32:16.400148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.067 [2024-07-25 13:32:16.400817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.067 [2024-07-25 13:32:16.401204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.402906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.404446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.407189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.408824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.410314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.411917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.412671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.413554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.414890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.416254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.068 [2024-07-25 13:32:16.418623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.419997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.421600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.422757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.423620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.425345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.426918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.428350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.430888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.432354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.433956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.434327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.435552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.068 [2024-07-25 13:32:16.436889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.438249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.439851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.442223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.443845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.444860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.445221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.447467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.449179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.450729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.452351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.454892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.456580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.068 [2024-07-25 13:32:16.456953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.457312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.458963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.460339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.461936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.463032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.465924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.466402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.466759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.467608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.469385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.470998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.472392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.068 [2024-07-25 13:32:16.473651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.476041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.476415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.476784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.478479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.480087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.480878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.482531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.484223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.486071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.487795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.487848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.068 [2024-07-25 13:32:16.489494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.068 [2024-07-25 13:32:16.491594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:06.332 [2024-07-25 13:32:16.599913] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:32:06.333 [2024-07-25 13:32:16.625225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.333 [2024-07-25 13:32:16.626090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.333 [2024-07-25 13:32:16.626650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.333 [2024-07-25 13:32:16.627002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.333 [2024-07-25 13:32:16.627068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.333 [2024-07-25 13:32:16.628777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.333 [2024-07-25 13:32:16.629960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.333 [2024-07-25 13:32:16.631580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.333 [2024-07-25 13:32:16.631636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.333 [2024-07-25 13:32:16.632668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.333 [2024-07-25 13:32:16.633058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.333 [2024-07-25 13:32:16.634408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.333 [2024-07-25 13:32:16.634472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.333 [2024-07-25 13:32:16.634812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.333 [2024-07-25 13:32:16.635856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.333 [2024-07-25 13:32:16.636826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.333 [2024-07-25 13:32:16.636873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.638343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.638805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.640318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.640380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.641182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.642341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.644093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.644151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.644658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.334 [2024-07-25 13:32:16.645038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.646382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.646443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.647962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.649169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.650402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.650448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.651911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.652407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.654017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.654097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.655798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.656986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.334 [2024-07-25 13:32:16.657841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.657886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.658700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.659082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.660127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.660196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.661352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.662372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.662751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.662795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.664198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.664695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.666459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.334 [2024-07-25 13:32:16.666536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.668066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.669069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.669828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.669873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.670796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.671198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.671805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.671867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.672913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.673991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.675597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.675641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.334 [2024-07-25 13:32:16.676279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.676678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.677221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.677286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.678388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.681614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.682799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.682847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.684488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.685054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.686372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.686433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.687148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.334 [2024-07-25 13:32:16.688257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.689868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.689912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.690777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.691165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.692350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.692414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.692898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.693950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.695571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.695615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.696395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.696776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.334 [2024-07-25 13:32:16.698491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.698562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.700238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.703024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.704193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.704237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.705575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.706011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.707607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.707670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.708277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.712497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.713513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.334 [2024-07-25 13:32:16.713555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.714918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.715393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.716795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.716858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.334 [2024-07-25 13:32:16.718277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.722844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.724617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.724664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.726117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.726509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.726862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.726928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.335 [2024-07-25 13:32:16.728446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.732550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.733205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.733249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.734566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.734936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.734985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.735031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.736619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.736660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.736950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.741432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.743047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.335 [2024-07-25 13:32:16.743096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.744660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.746662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.746720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.747809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.747851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.748126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.751760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.752154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.752199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.753500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.755370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.755422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.335 [2024-07-25 13:32:16.756795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.756840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.757077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.760893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.762516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.762560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.763387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.764241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.764292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.765610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.765651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.765890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.769064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.335 [2024-07-25 13:32:16.770439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.770483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.772088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.772838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.772888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.773382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.773424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.773665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.777900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.779236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.779279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.780640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.335 [2024-07-25 13:32:16.782217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.335 [2024-07-25 13:32:16.782271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:06.599 [2024-07-25 13:32:17.054942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! [previous message repeated ~270 times between 13:32:16.782271 and 13:32:17.054942; wall clock 00:32:06.335 to 00:32:06.599]
00:32:06.599 [2024-07-25 13:32:17.055629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.599 [2024-07-25 13:32:17.055695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.599 [2024-07-25 13:32:17.056056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.599 [2024-07-25 13:32:17.056101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.599 [2024-07-25 13:32:17.056490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.599 [2024-07-25 13:32:17.061426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.599 [2024-07-25 13:32:17.061481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.062366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.062424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.063684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.063737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.065064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.065104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.600 [2024-07-25 13:32:17.065400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.069562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.069617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.071061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.071114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.072593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.072647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.073117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.073173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.073420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.076892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.078148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.079121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.600 [2024-07-25 13:32:17.079171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.079958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.080009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.080048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.080085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.600 [2024-07-25 13:32:17.080399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.084821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.084881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.084922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.084960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.086725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.086777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.086827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.861 [2024-07-25 13:32:17.086866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.087251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.091214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.091262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.091308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.091346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.091751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.091794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.091832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.091869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.092163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.094337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.094386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.861 [2024-07-25 13:32:17.094424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.094461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.094840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.094886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.094924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.094962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.095212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.097432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.097481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.097529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.097572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.097939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.097983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.861 [2024-07-25 13:32:17.098021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.098059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.098310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.100857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.100915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.100953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.100993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.101369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.101425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.861 [2024-07-25 13:32:17.101466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.101504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.101746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.103565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.862 [2024-07-25 13:32:17.103627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.103668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.103705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.104071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.104115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.104160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.104198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.104440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.108227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.108275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.108313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.108350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.108760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.862 [2024-07-25 13:32:17.108805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.108844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.108882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.109124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.112224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.112273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.112310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.112347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.112784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.112832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.112870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.112907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.113276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.862 [2024-07-25 13:32:17.116537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.116586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.116623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.117366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.117737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.117782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.117836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.119587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.119939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.123269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.124290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.124336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.125642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.862 [2024-07-25 13:32:17.126009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.126987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.127034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.127953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.128204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.129829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.131011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.131057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.131540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.131904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.133459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.133513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.134457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.862 [2024-07-25 13:32:17.134740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.137082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.138601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.138648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.139890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.140374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.141505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.141552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.143267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.143672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.148246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.148940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.148986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.862 [2024-07-25 13:32:17.150480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.150879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.151626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.151674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.152596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.152847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.155786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.156821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.156865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.157831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.158205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.159821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.159871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.862 [2024-07-25 13:32:17.160263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.160508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.163060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.163682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.163726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.165063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.165438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.167113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.167163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.168564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.862 [2024-07-25 13:32:17.168912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.863 [2024-07-25 13:32:17.172351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:06.863 [2024-07-25 13:32:17.173305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:06.863 [2024-07-25 13:32:17.173351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:07.126 [... identical "Failed to get src_mbufs!" errors repeated from 13:32:17.173351 through 13:32:17.476198; duplicate entries elided ...]
00:32:07.126 [2024-07-25 13:32:17.476252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.126 [2024-07-25 13:32:17.478034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.126 [2024-07-25 13:32:17.478077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.126 [2024-07-25 13:32:17.478404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.126 [2024-07-25 13:32:17.482650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.126 [2024-07-25 13:32:17.482704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.126 [2024-07-25 13:32:17.483367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.483409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.484099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.484159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.485505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.485557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.485894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.127 [2024-07-25 13:32:17.490854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.490908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.491929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.491972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.494036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.494089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.495055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.495096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.495361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.498464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.498518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.499883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.499924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.127 [2024-07-25 13:32:17.502013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.502086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.503733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.503774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.504081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.508097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.508161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.508792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.508831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.509598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.509652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.510980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.511023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.127 [2024-07-25 13:32:17.511427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.515130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.515189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.515861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.515902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.516707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.516758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.517116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.517169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.517522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.521264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.521319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.521677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.127 [2024-07-25 13:32:17.521721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.522752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.522805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.524471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.524521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.524957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.526843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.526907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.527280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.527337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.528152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.528206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.528563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.127 [2024-07-25 13:32:17.528615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.528996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.531182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.531237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.531604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.531659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.532522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.532577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.532934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.532978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.533387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.535364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.535418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.127 [2024-07-25 13:32:17.535774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.535825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.536612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.536665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.537024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.537067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.537329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.539546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.539608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.539966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.540023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.540892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.540946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.127 [2024-07-25 13:32:17.541316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.541363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.541720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.545380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.545435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.547028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.547078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.127 [2024-07-25 13:32:17.548259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.548318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.548674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.549103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.549356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.553918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.128 [2024-07-25 13:32:17.553972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.554338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.554384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.555988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.556043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.557569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.557616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.557862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.560883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.560937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.561914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.561960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.563662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.128 [2024-07-25 13:32:17.563715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.564798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.564843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.565188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.569229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.569284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.570879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.570940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.571831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.571884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.572253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.572298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.572573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.128 [2024-07-25 13:32:17.576820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.576881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.577248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.577288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.578766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.578818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.580048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.580093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.580445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.583950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.584003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.585341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.585393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.128 [2024-07-25 13:32:17.587339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.587400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.588991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.589034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.589337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.594098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.594157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.594529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.594569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.596202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.596255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.596841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.596885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.128 [2024-07-25 13:32:17.597133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.601534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.603114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.603484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.603528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.605603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.605663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.605703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.605740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.606168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.609586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.609640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.128 [2024-07-25 13:32:17.609679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.128 [2024-07-25 13:32:17.609716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:07.394 [2024-07-25 13:32:17.776463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.778006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.778297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.779089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.779166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.779219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.780783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.781306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.783026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.783389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.785099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.785390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.786253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.394 [2024-07-25 13:32:17.787530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.788104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.788465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.788918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.790181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.791938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.793586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.793874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.797670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.798356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.799523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.801195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.802754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.394 [2024-07-25 13:32:17.804399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.805843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.806300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.806551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.809979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.810981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.812405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.812761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.814436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.815450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.816562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.818002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.818360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.394 [2024-07-25 13:32:17.819797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.821350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.822432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.823228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.824589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.825363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.825722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.826077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.826329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.828901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.830200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.394 [2024-07-25 13:32:17.830754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.831109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.395 [2024-07-25 13:32:17.832960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.834559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.835146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.836423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.836670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.838008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.839324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.841081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.841776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.843794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.844173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.844531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.845560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.395 [2024-07-25 13:32:17.845847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.847782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.849303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.849661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.850029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.851413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.852629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.853968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.854964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.855234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.858267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.858639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.860102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.395 [2024-07-25 13:32:17.860509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.861356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.863029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.864051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.864900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.865153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.866381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.866749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.868430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.868483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.870192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.870659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.872093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.395 [2024-07-25 13:32:17.872145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.872391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.873581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.873633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.874025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.874069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.875793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.875846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.876234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.876279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.395 [2024-07-25 13:32:17.876524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.877723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.877776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.658 [2024-07-25 13:32:17.879298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.879341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.881283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.881336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.881695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.881756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.882200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.883737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.883792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.884153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.884193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.884991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.885043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.658 [2024-07-25 13:32:17.885416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.885463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.885851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.887335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.887390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.887759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.887799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.888638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.888692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.889049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.889098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.889533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.891020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.658 [2024-07-25 13:32:17.891075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.891437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.891476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.892380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.892434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.892793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.892837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.893262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.895268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.895322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.895679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.895724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.896522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.658 [2024-07-25 13:32:17.896576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.896930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.896969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.897306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.898613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.898665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.899023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.899075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.899878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.899930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.900293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.900333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.658 [2024-07-25 13:32:17.900650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.659 [2024-07-25 13:32:17.902003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.659 [2024-07-25 13:32:17.902055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.659 [2024-07-25 13:32:17.902422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.659 [2024-07-25 13:32:17.902487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.659 [2024-07-25 13:32:17.903235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.659 [2024-07-25 13:32:17.903288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.659 [2024-07-25 13:32:17.903642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.659 [2024-07-25 13:32:17.903687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.659 [2024-07-25 13:32:17.903985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.659 [2024-07-25 13:32:17.908592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.659 [2024-07-25 13:32:17.908647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.659 [2024-07-25 13:32:17.909836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.659 [2024-07-25 13:32:17.909879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.659 [2024-07-25 13:32:17.911486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:07.659 [... same error repeated continuously from 13:32:17.911486 through 13:32:18.061283 ...]
00:32:07.662 [2024-07-25 13:32:18.061283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:07.662 [2024-07-25 13:32:18.062899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.063150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.063991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.064667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.064711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.066015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.066388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.067999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.068042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.069103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.069556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.070589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.072194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.662 [2024-07-25 13:32:18.072245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.073905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.074279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.075229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.075275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.076608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.076896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.077685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.078056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.078097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.078511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.078877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.080482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.662 [2024-07-25 13:32:18.080529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.082128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.082377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.083227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.084614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.084658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.086262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.086674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.087036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.087077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.087794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.088043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.088836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.662 [2024-07-25 13:32:18.090449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.090493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.091768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.092171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.093531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.093575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.095180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.095498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.096614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.097950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.097993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.099361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.099723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.662 [2024-07-25 13:32:18.101017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.101061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.102668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.102980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.103809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.104497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.104541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.104894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.105299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.106323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.106371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.107571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.107818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.662 [2024-07-25 13:32:18.108651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.109016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.662 [2024-07-25 13:32:18.109056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.109417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.109789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.111080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.111126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.111620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.111864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.112773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.113137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.113187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.114161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.663 [2024-07-25 13:32:18.114574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.115967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.116014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.116052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.116301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.117189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.117567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.117608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.117961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.118333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.119855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.119904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.120633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.663 [2024-07-25 13:32:18.120889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.121774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.122146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.122189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.123382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.123832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.125448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.125492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.126341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.126589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.127496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.128228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.128272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.663 [2024-07-25 13:32:18.128625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.129029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.130441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.130490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.131950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.132245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.133035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.133408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.133450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.133801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.134224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.135437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.135483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.663 [2024-07-25 13:32:18.136165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.136420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.137277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.137642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.137683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.138038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.138415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.140188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.140237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.140943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.141197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.142090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.663 [2024-07-25 13:32:18.142465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.663 [2024-07-25 13:32:18.142508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.143624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.144071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.145069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.145113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.146612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.146899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.148025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.148078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.148115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.148506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.148872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.150667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.926 [2024-07-25 13:32:18.151207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.152542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.152788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.153706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.154460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.155695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.157046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.157506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.158741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.159867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.160239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.160698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.162772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.926 [2024-07-25 13:32:18.163904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.165155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.166118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.166987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.168469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.169699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.170358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.170608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.171777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.172155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.173781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.175163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.926 [2024-07-25 13:32:18.177196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.926 [2024-07-25 13:32:18.178605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.929 [last message repeated ~270 more times between 13:32:18.178605 and 13:32:18.350241; duplicate occurrences omitted] 
00:32:07.929 [2024-07-25 13:32:18.350611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.929 [2024-07-25 13:32:18.351417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.929 [2024-07-25 13:32:18.351464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.929 [2024-07-25 13:32:18.351502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.929 [2024-07-25 13:32:18.351540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.929 [2024-07-25 13:32:18.351943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.929 [2024-07-25 13:32:18.351986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.929 [2024-07-25 13:32:18.352026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.929 [2024-07-25 13:32:18.352071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.352330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.353157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.353205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.353243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.930 [2024-07-25 13:32:18.353283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.353687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.353730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.353768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.353806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.354074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.355026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.355074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.355113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.355158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.355526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.355568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.355612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.930 [2024-07-25 13:32:18.355650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.355893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.356763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.356810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.356848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.356903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.357278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.357322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.357362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.357406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.357650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.358545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.358593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.930 [2024-07-25 13:32:18.358631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.358676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.359136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.359188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.359234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.359284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.359706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.360552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.360609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.360648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.360693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.361057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.361105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.930 [2024-07-25 13:32:18.361148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.361186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.361432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.362270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.362318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.362358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.362402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.362768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.362819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.362862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.362900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.363149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.364055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.930 [2024-07-25 13:32:18.364102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.364146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.365845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.366217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.366270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.366316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.367900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.368153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.368991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.370351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.370395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.371747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.372119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.930 [2024-07-25 13:32:18.372495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.372542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.372994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.373246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.374069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.375449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.375493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.376440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.376842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.378196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.378240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.379598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.379846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.930 [2024-07-25 13:32:18.380881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.382545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.382595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.384296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.384676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.930 [2024-07-25 13:32:18.386116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.386166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.387218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.387500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.388280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.389796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.389839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.390215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.931 [2024-07-25 13:32:18.390771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.392463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.392507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.394152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.394396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.395296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.396648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.396692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.398047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.398420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.399749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.399817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.400179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:07.931 [2024-07-25 13:32:18.400603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.401432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.402981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.403034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.404674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.405081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.406426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.406470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.407836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.408116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.409008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.409391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:07.931 [2024-07-25 13:32:18.409438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.192 [2024-07-25 13:32:18.411030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.411415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.413068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.413119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.414835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.415120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.415915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.417289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.417343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.418540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.419077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.419455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.419503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.193 [2024-07-25 13:32:18.420970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.421221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.422068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.422948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.422992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.424330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.424707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.426075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.426120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.427304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.427761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.428887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.429910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.193 [2024-07-25 13:32:18.429957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.430671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.431046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.432352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.432397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.432768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.433033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.433884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.435661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.435707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.436728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.437128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [2024-07-25 13:32:18.438457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.193 [2024-07-25 13:32:18.438514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.193 [... previous message repeated ~270 more times between 13:32:18.438 and 13:32:18.623 ...]
00:32:08.197 [2024-07-25 13:32:18.625720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.625786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.627029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.627070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.628642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.628695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.629055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.629106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.629501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.631095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.631157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.632617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.632668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.197 [2024-07-25 13:32:18.633358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.633412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.633768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.633811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.634057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.635782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.635836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.636822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.636867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.637553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.637607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.637970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.638012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.197 [2024-07-25 13:32:18.638264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.639891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.639945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.641306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.641351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.643431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.643485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.643842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.643886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.644252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.646331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.646386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.647592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.197 [2024-07-25 13:32:18.647636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.648801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.648860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.649220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.649264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.649592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.652128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.652195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.652551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.652595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.653554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.653607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.654953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.197 [2024-07-25 13:32:18.654995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.655315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.657458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.657511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.658869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.658910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.659664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.659722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.660083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.660452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.660809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.662985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.663039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.197 [2024-07-25 13:32:18.664380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.664421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.665339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.665394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.197 [2024-07-25 13:32:18.667003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.667058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.667308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.668543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.668598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.668954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.668997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.669800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.669864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.198 [2024-07-25 13:32:18.670224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.670276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.670536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.672812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.672867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.674362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.674404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.675180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.675234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.676126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.676173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.676491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.198 [2024-07-25 13:32:18.678569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.198 [2024-07-25 13:32:18.678623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.680320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.680369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.682117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.682175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.683860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.683915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.684283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.686413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.686466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.687823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.687864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.689189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.460 [2024-07-25 13:32:18.689255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.690840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.690887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.691131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.692300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.692354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.692712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.692757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.694416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.694469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.695824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.695866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.696112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.460 [2024-07-25 13:32:18.698724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.700514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.702167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.702211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.702996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.703049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.703088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.703125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.703383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.705931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.705994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.706034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.706073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.460 [2024-07-25 13:32:18.707749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.707801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.707839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.707876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.708167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.709088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.709135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.709183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.709223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.709655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.709700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.709739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.709778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.460 [2024-07-25 13:32:18.710022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.710932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.710980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.460 [2024-07-25 13:32:18.711039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.711080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.711457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.711503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.711541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.711579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.711832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.712731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.712786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.712830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.461 [2024-07-25 13:32:18.712867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.713246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.713291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.713341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.713381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.713740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.714808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.714860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.714897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.714938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.715360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.715405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:32:08.461 [2024-07-25 13:32:18.715447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:32:08.461 [2024-07-25 13:32:18.715484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:08.463 [2024-07-25 13:32:18.837897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:08.463 [2024-07-25 13:32:18.838795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:32:09.032
00:32:09.032 Latency(us)
00:32:09.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:09.032 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:09.032 Verification LBA range: start 0x0 length 0x100
00:32:09.032 crypto_ram : 5.81 44.10 2.76 0.00 0.00 2819178.50 62075.70 2603823.92
00:32:09.032 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:09.032 Verification LBA range: start 0x100 length 0x100
00:32:09.033 crypto_ram : 5.79 44.23 2.76 0.00 0.00 2803349.91 68367.16 2576980.38
00:32:09.033 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:09.033 Verification LBA range: start 0x0 length 0x100
00:32:09.033 crypto_ram2 : 5.81 44.09 2.76 0.00 0.00 2713169.10 61656.27 2603823.92
00:32:09.033 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:09.033 Verification LBA range: start 0x100 length 0x100
00:32:09.033 crypto_ram2 : 5.79 44.22 2.76 0.00 0.00 2699616.26 67947.72 2509871.51
00:32:09.033 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:09.033 Verification LBA range: start 0x0 length 0x100
00:32:09.033 crypto_ram3 : 5.58 281.39 17.59 0.00 0.00 405597.57 1966.08 583847.12
00:32:09.033 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:09.033 Verification LBA range: start 0x100 length 0x100
00:32:09.033 crypto_ram3 : 5.59 292.95 18.31 0.00 0.00 390105.76 52219.08 583847.12
00:32:09.033 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:09.033 Verification LBA range: start 0x0 length 0x100
crypto_ram4 : 5.68 297.33 18.58 0.00 0.00 372436.62 10538.19 493250.15
00:32:09.033 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:09.033 Verification LBA range: start 0x100 length 0x100
00:32:09.033 crypto_ram4 : 5.69 309.30 19.33 0.00 0.00 358613.41 16252.93 499961.04
00:32:09.033 ===================================================================================================================
00:32:09.033 Total : 1357.62 84.85 0.00 0.00 697913.01 1966.08 2603823.92
00:32:09.292
00:32:09.293 real 0m8.860s
00:32:09.293 user 0m16.879s
00:32:09.293 sys 0m0.388s
00:32:09.293 13:32:19 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:09.293 13:32:19 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:32:09.293 ************************************
00:32:09.293 END TEST bdev_verify_big_io
00:32:09.293 ************************************
00:32:09.293 13:32:19 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:09.293 13:32:19 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:32:09.293 13:32:19 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:09.293 13:32:19 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:32:09.293 ************************************
00:32:09.293 START TEST bdev_write_zeroes
00:32:09.293 ************************************
00:32:09.293 13:32:19 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:09.293 [2024-07-25
13:32:19.692222] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:32:09.293 [2024-07-25 13:32:19.692274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057777 ]
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3d:01.0 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3d:01.1 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3d:01.2 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3d:01.3 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3d:01.4 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3d:01.5 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3d:01.6 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3d:01.7 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3d:02.0 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3d:02.1 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3d:02.2 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3d:02.3 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3d:02.4 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3d:02.5 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3d:02.6 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3d:02.7 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3f:01.0 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3f:01.1 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3f:01.2 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3f:01.3 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3f:01.4 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3f:01.5 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3f:01.6 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3f:01.7 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3f:02.0 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3f:02.1 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3f:02.2 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3f:02.3 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3f:02.4 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3f:02.5 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3f:02.6 cannot be used
00:32:09.293 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:09.293 EAL: Requested device 0000:3f:02.7 cannot be used
00:32:09.552 [2024-07-25 13:32:19.822703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:09.552 [2024-07-25 13:32:19.907119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:09.552 [2024-07-25 13:32:19.928364] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:32:09.552 [2024-07-25 13:32:19.936385] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:32:09.552 [2024-07-25 13:32:19.944404] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:32:09.811 [2024-07-25 13:32:20.047590] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:32:12.346 [2024-07-25 13:32:22.224417] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1"
00:32:12.346 [2024-07-25 13:32:22.224474] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:32:12.346 [2024-07-25 13:32:22.224488]
vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:12.346 [2024-07-25 13:32:22.232436] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:32:12.346 [2024-07-25 13:32:22.232454] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:12.346 [2024-07-25 13:32:22.232465] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:12.346 [2024-07-25 13:32:22.240457] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:32:12.346 [2024-07-25 13:32:22.240473] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:12.346 [2024-07-25 13:32:22.240483] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:12.346 [2024-07-25 13:32:22.248477] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:32:12.346 [2024-07-25 13:32:22.248493] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:12.346 [2024-07-25 13:32:22.248503] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:12.346 Running I/O for 1 seconds... 
00:32:12.915 00:32:12.915 Latency(us) 00:32:12.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.915 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:12.915 crypto_ram : 1.02 2124.61 8.30 0.00 0.00 59803.72 5006.95 71722.60 00:32:12.915 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:12.915 crypto_ram2 : 1.02 2130.36 8.32 0.00 0.00 59338.58 4980.74 66689.43 00:32:12.915 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:12.915 crypto_ram3 : 1.02 16381.03 63.99 0.00 0.00 7704.49 2280.65 9961.47 00:32:12.915 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:12.915 crypto_ram4 : 1.02 16365.79 63.93 0.00 0.00 7680.51 2280.65 8074.04 00:32:12.915 =================================================================================================================== 00:32:12.915 Total : 37001.79 144.54 0.00 0.00 13681.20 2280.65 71722.60 00:32:13.483 00:32:13.483 real 0m4.057s 00:32:13.483 user 0m3.690s 00:32:13.483 sys 0m0.325s 00:32:13.483 13:32:23 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:13.483 13:32:23 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:32:13.483 ************************************ 00:32:13.483 END TEST bdev_write_zeroes 00:32:13.483 ************************************ 00:32:13.483 13:32:23 blockdev_crypto_aesni -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:13.483 13:32:23 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:32:13.483 13:32:23 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:13.483 
13:32:23 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:13.483 ************************************ 00:32:13.483 START TEST bdev_json_nonenclosed 00:32:13.483 ************************************ 00:32:13.483 13:32:23 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:13.483 [2024-07-25 13:32:23.822714] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:32:13.483 [2024-07-25 13:32:23.822769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058333 ]
00:32:13.483 [2024-07-25 13:32:23.953744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.742 [2024-07-25 13:32:24.038009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.742 [2024-07-25 13:32:24.038074] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:32:13.742 [2024-07-25 13:32:24.038090] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:32:13.742 [2024-07-25 13:32:24.038101] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:13.742 00:32:13.742 real 0m0.357s 00:32:13.742 user 0m0.205s 00:32:13.742 sys 0m0.150s 00:32:13.742 13:32:24 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:13.742 13:32:24 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:32:13.742 ************************************ 00:32:13.742 END TEST bdev_json_nonenclosed 00:32:13.742 ************************************ 00:32:13.742 13:32:24 blockdev_crypto_aesni -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:13.742 13:32:24 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:32:13.742 13:32:24 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:13.742 13:32:24 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:13.742 ************************************ 00:32:13.742 START TEST bdev_json_nonarray 00:32:13.742 ************************************ 00:32:13.742 13:32:24 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:14.001 [2024-07-25 13:32:24.257663] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:32:14.001 [2024-07-25 13:32:24.257716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058529 ]
00:32:14.001 [2024-07-25 13:32:24.388795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.001 [2024-07-25 13:32:24.472300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.001 [2024-07-25 13:32:24.472371] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:32:14.001 [2024-07-25 13:32:24.472387] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:32:14.001 [2024-07-25 13:32:24.472401] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:14.260 00:32:14.260 real 0m0.357s 00:32:14.260 user 0m0.201s 00:32:14.260 sys 0m0.154s 00:32:14.260 13:32:24 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:14.260 13:32:24 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:32:14.260 ************************************ 00:32:14.260 END TEST bdev_json_nonarray 00:32:14.260 ************************************ 00:32:14.260 13:32:24 blockdev_crypto_aesni -- bdev/blockdev.sh@786 -- # [[ crypto_aesni == bdev ]] 00:32:14.260 13:32:24 blockdev_crypto_aesni -- bdev/blockdev.sh@793 -- # [[ crypto_aesni == gpt ]] 00:32:14.260 13:32:24 blockdev_crypto_aesni -- bdev/blockdev.sh@797 -- # [[ crypto_aesni == crypto_sw ]] 00:32:14.260 13:32:24 blockdev_crypto_aesni -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:32:14.260 13:32:24 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # cleanup 00:32:14.260 13:32:24 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:32:14.260 13:32:24 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:32:14.260 13:32:24 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]] 00:32:14.260 13:32:24 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ crypto_aesni == daos ]] 00:32:14.260 13:32:24 blockdev_crypto_aesni -- bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]] 00:32:14.260 13:32:24 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]] 00:32:14.260 00:32:14.260 real 1m10.286s 00:32:14.260 user 2m54.184s 00:32:14.260 sys 0m8.511s 00:32:14.260 13:32:24 
blockdev_crypto_aesni -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:14.260 13:32:24 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:32:14.260 ************************************ 00:32:14.260 END TEST blockdev_crypto_aesni 00:32:14.260 ************************************ 00:32:14.260 13:32:24 -- spdk/autotest.sh@362 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:32:14.260 13:32:24 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:14.260 13:32:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:14.260 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:32:14.260 ************************************ 00:32:14.260 START TEST blockdev_crypto_sw 00:32:14.260 ************************************ 00:32:14.260 13:32:24 blockdev_crypto_sw -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:32:14.519 * Looking for test storage... 
00:32:14.519 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # : 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # uname -s 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@681 -- # test_type=crypto_sw 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # crypto_device= 00:32:14.519 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # dek= 00:32:14.520 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # env_ctx= 00:32:14.520 
13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:32:14.520 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:32:14.520 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == bdev ]] 00:32:14.520 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == crypto_* ]] 00:32:14.520 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:32:14.520 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:32:14.520 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1058669 00:32:14.520 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:14.520 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 1058669 00:32:14.520 13:32:24 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:32:14.520 13:32:24 blockdev_crypto_sw -- common/autotest_common.sh@831 -- # '[' -z 1058669 ']' 00:32:14.520 13:32:24 blockdev_crypto_sw -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.520 13:32:24 blockdev_crypto_sw -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:14.520 13:32:24 blockdev_crypto_sw -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:14.520 13:32:24 blockdev_crypto_sw -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:14.520 13:32:24 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:14.520 [2024-07-25 13:32:24.865347] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:32:14.520 [2024-07-25 13:32:24.865397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058669 ]
00:32:14.520 [2024-07-25 13:32:24.985277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.780 [2024-07-25 13:32:25.069535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.348 13:32:25 blockdev_crypto_sw -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:15.348 13:32:25 blockdev_crypto_sw -- common/autotest_common.sh@864 -- # return 0 00:32:15.348 13:32:25 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:32:15.348 13:32:25 blockdev_crypto_sw -- bdev/blockdev.sh@710 -- # setup_crypto_sw_conf 00:32:15.348 13:32:25 blockdev_crypto_sw -- bdev/blockdev.sh@192 -- # rpc_cmd 00:32:15.348 13:32:25 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.348 13:32:25 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:15.625 Malloc0 00:32:15.625 Malloc1 00:32:15.625 true 00:32:15.625 true 00:32:15.625 true 00:32:15.625 [2024-07-25 13:32:25.928055] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:32:15.625 crypto_ram 00:32:15.625 [2024-07-25 13:32:25.936085] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: 
*NOTICE*: Found key "test_dek_sw2" 00:32:15.625 crypto_ram2 00:32:15.625 [2024-07-25 13:32:25.944104] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:32:15.625 crypto_ram3 00:32:15.625 [ 00:32:15.625 { 00:32:15.625 "name": "Malloc1", 00:32:15.625 "aliases": [ 00:32:15.625 "0c7e9d85-b50d-4a83-9aab-c24ca4a6f7ec" 00:32:15.625 ], 00:32:15.625 "product_name": "Malloc disk", 00:32:15.625 "block_size": 4096, 00:32:15.625 "num_blocks": 4096, 00:32:15.625 "uuid": "0c7e9d85-b50d-4a83-9aab-c24ca4a6f7ec", 00:32:15.625 "assigned_rate_limits": { 00:32:15.625 "rw_ios_per_sec": 0, 00:32:15.625 "rw_mbytes_per_sec": 0, 00:32:15.625 "r_mbytes_per_sec": 0, 00:32:15.625 "w_mbytes_per_sec": 0 00:32:15.625 }, 00:32:15.625 "claimed": true, 00:32:15.625 "claim_type": "exclusive_write", 00:32:15.625 "zoned": false, 00:32:15.625 "supported_io_types": { 00:32:15.625 "read": true, 00:32:15.625 "write": true, 00:32:15.625 "unmap": true, 00:32:15.625 "flush": true, 00:32:15.625 "reset": true, 00:32:15.625 "nvme_admin": false, 00:32:15.625 "nvme_io": false, 00:32:15.625 "nvme_io_md": false, 00:32:15.625 "write_zeroes": true, 00:32:15.625 "zcopy": true, 00:32:15.625 "get_zone_info": false, 00:32:15.625 "zone_management": false, 00:32:15.625 "zone_append": false, 00:32:15.625 "compare": false, 00:32:15.625 "compare_and_write": false, 00:32:15.625 "abort": true, 00:32:15.625 "seek_hole": false, 00:32:15.625 "seek_data": false, 00:32:15.625 "copy": true, 00:32:15.625 "nvme_iov_md": false 00:32:15.625 }, 00:32:15.625 "memory_domains": [ 00:32:15.625 { 00:32:15.625 "dma_device_id": "system", 00:32:15.625 "dma_device_type": 1 00:32:15.625 }, 00:32:15.625 { 00:32:15.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:15.625 "dma_device_type": 2 00:32:15.625 } 00:32:15.625 ], 00:32:15.625 "driver_specific": {} 00:32:15.625 } 00:32:15.625 ] 00:32:15.625 13:32:25 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.625 13:32:25 
blockdev_crypto_sw -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:32:15.625 13:32:25 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.625 13:32:25 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:15.625 13:32:25 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.625 13:32:25 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # cat 00:32:15.625 13:32:25 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:32:15.625 13:32:25 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.625 13:32:25 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:15.625 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.625 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:32:15.625 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.625 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:15.625 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.625 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:32:15.625 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.625 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:15.625 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.625 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:32:15.625 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:32:15.625 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:32:15.625 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.625 13:32:26 blockdev_crypto_sw -- 
common/autotest_common.sh@10 -- # set +x 00:32:15.900 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.900 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:32:15.900 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "3c5fc3dd-d5b2-5d0e-b916-aa3a69b8932d"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3c5fc3dd-d5b2-5d0e-b916-aa3a69b8932d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "b1ff8df5-b228-50f4-ab6f-8ce6b97cb6ff"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "b1ff8df5-b228-50f4-ab6f-8ce6b97cb6ff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:32:15.900 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r .name 00:32:15.900 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:32:15.900 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:32:15.900 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:32:15.900 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # killprocess 1058669 00:32:15.900 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@950 -- # '[' -z 1058669 ']' 00:32:15.900 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # kill -0 1058669 00:32:15.900 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # uname 00:32:15.900 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:15.900 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1058669 00:32:15.900 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:15.900 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:15.900 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1058669' 00:32:15.900 killing process with pid 1058669 00:32:15.900 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@969 -- # kill 1058669 00:32:15.900 
13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@974 -- # wait 1058669 00:32:16.159 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:16.159 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:32:16.159 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:32:16.159 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:16.159 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:16.159 ************************************ 00:32:16.159 START TEST bdev_hello_world 00:32:16.159 ************************************ 00:32:16.159 13:32:26 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:32:16.418 [2024-07-25 13:32:26.649879] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:32:16.418 [2024-07-25 13:32:26.649925] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058962 ] 00:32:16.419 [2024-07-25 13:32:26.766740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.419 [2024-07-25 13:32:26.847834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.678 [2024-07-25 13:32:27.015697] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:32:16.678 [2024-07-25 13:32:27.015755] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:16.678 [2024-07-25 13:32:27.015768] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:16.678 [2024-07-25 13:32:27.023715] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:32:16.678 [2024-07-25 13:32:27.023732] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:16.678 [2024-07-25 13:32:27.023743] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:16.678 [2024-07-25 13:32:27.031736] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:32:16.678 [2024-07-25 13:32:27.031753]
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:32:16.678 [2024-07-25 13:32:27.031763] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:16.678 [2024-07-25 13:32:27.071468] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:32:16.678 [2024-07-25 13:32:27.071500] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:32:16.678 [2024-07-25 13:32:27.071516] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:32:16.678 [2024-07-25 13:32:27.072746] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:32:16.678 [2024-07-25 13:32:27.072822] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:32:16.678 [2024-07-25 13:32:27.072841] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:32:16.678 [2024-07-25 13:32:27.072872] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:32:16.678 00:32:16.678 [2024-07-25 13:32:27.072888] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:32:16.937 00:32:16.937 real 0m0.659s 00:32:16.937 user 0m0.435s 00:32:16.937 sys 0m0.209s 00:32:16.937 13:32:27 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:16.937 13:32:27 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:32:16.937 ************************************ 00:32:16.937 END TEST bdev_hello_world 00:32:16.937 ************************************ 00:32:16.937 13:32:27 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:32:16.937 13:32:27 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:16.937 13:32:27 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:16.937 13:32:27 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:16.937 ************************************ 00:32:16.937 START TEST bdev_bounds 00:32:16.937 ************************************ 00:32:16.937 13:32:27 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:32:16.937 13:32:27 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=1059110 00:32:16.937 13:32:27 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:32:16.937 13:32:27 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:32:16.937 13:32:27 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 1059110' 00:32:16.937 Process bdevio pid: 1059110 00:32:16.937 13:32:27 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 1059110 00:32:16.937 13:32:27 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@831 -- # '[' -z 1059110 ']' 13:32:27 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.937 13:32:27 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:16.937 13:32:27 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:16.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:16.937 13:32:27 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:16.937 13:32:27 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:32:16.937 [2024-07-25 13:32:27.403444] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:32:17.196 [2024-07-25 13:32:27.403504] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1059110 ] 00:32:17.197 [2024-07-25 13:32:27.536378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:17.197 [2024-07-25 13:32:27.621596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.197 [2024-07-25 13:32:27.621689] reactor.c: 941:reactor_run:
*NOTICE*: Reactor started on core 2 00:32:17.197 [2024-07-25 13:32:27.621693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.456 [2024-07-25 13:32:27.780527] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:32:17.456 [2024-07-25 13:32:27.780587] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:17.456 [2024-07-25 13:32:27.780601] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:17.456 [2024-07-25 13:32:27.788551] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:32:17.456 [2024-07-25 13:32:27.788568] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:17.456 [2024-07-25 13:32:27.788579] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:17.456 [2024-07-25 13:32:27.796575] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:32:17.456 [2024-07-25 13:32:27.796596] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:32:17.456 [2024-07-25 13:32:27.796606] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:18.025 13:32:28 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:18.025 13:32:28 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:32:18.025 13:32:28 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:32:18.025 I/O targets: 00:32:18.025 crypto_ram: 32768 blocks of 512 bytes (16 MiB) 00:32:18.025 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB) 00:32:18.025 00:32:18.025 00:32:18.025 CUnit - A unit testing framework for C - Version 2.1-3 00:32:18.025 http://cunit.sourceforge.net/ 
00:32:18.025 00:32:18.025 00:32:18.025 Suite: bdevio tests on: crypto_ram3 00:32:18.025 Test: blockdev write read block ...passed 00:32:18.025 Test: blockdev write zeroes read block ...passed 00:32:18.025 Test: blockdev write zeroes read no split ...passed 00:32:18.025 Test: blockdev write zeroes read split ...passed 00:32:18.025 Test: blockdev write zeroes read split partial ...passed 00:32:18.025 Test: blockdev reset ...passed 00:32:18.025 Test: blockdev write read 8 blocks ...passed 00:32:18.025 Test: blockdev write read size > 128k ...passed 00:32:18.025 Test: blockdev write read invalid size ...passed 00:32:18.025 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:18.025 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:18.025 Test: blockdev write read max offset ...passed 00:32:18.025 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:18.025 Test: blockdev writev readv 8 blocks ...passed 00:32:18.025 Test: blockdev writev readv 30 x 1block ...passed 00:32:18.025 Test: blockdev writev readv block ...passed 00:32:18.025 Test: blockdev writev readv size > 128k ...passed 00:32:18.025 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:18.025 Test: blockdev comparev and writev ...passed 00:32:18.025 Test: blockdev nvme passthru rw ...passed 00:32:18.025 Test: blockdev nvme passthru vendor specific ...passed 00:32:18.025 Test: blockdev nvme admin passthru ...passed 00:32:18.025 Test: blockdev copy ...passed 00:32:18.025 Suite: bdevio tests on: crypto_ram 00:32:18.025 Test: blockdev write read block ...passed 00:32:18.025 Test: blockdev write zeroes read block ...passed 00:32:18.025 Test: blockdev write zeroes read no split ...passed 00:32:18.025 Test: blockdev write zeroes read split ...passed 00:32:18.025 Test: blockdev write zeroes read split partial ...passed 00:32:18.025 Test: blockdev reset ...passed 00:32:18.025 Test: blockdev write read 8 blocks ...passed 
00:32:18.025 Test: blockdev write read size > 128k ...passed 00:32:18.025 Test: blockdev write read invalid size ...passed 00:32:18.025 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:18.025 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:18.025 Test: blockdev write read max offset ...passed 00:32:18.025 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:18.025 Test: blockdev writev readv 8 blocks ...passed 00:32:18.025 Test: blockdev writev readv 30 x 1block ...passed 00:32:18.025 Test: blockdev writev readv block ...passed 00:32:18.025 Test: blockdev writev readv size > 128k ...passed 00:32:18.025 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:18.025 Test: blockdev comparev and writev ...passed 00:32:18.025 Test: blockdev nvme passthru rw ...passed 00:32:18.025 Test: blockdev nvme passthru vendor specific ...passed 00:32:18.025 Test: blockdev nvme admin passthru ...passed 00:32:18.025 Test: blockdev copy ...passed 00:32:18.025 00:32:18.025 Run Summary: Type Total Ran Passed Failed Inactive 00:32:18.025 suites 2 2 n/a 0 0 00:32:18.025 tests 46 46 46 0 0 00:32:18.025 asserts 260 260 260 0 n/a 00:32:18.025 00:32:18.025 Elapsed time = 0.077 seconds 00:32:18.025 0 00:32:18.025 13:32:28 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 1059110 00:32:18.025 13:32:28 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 1059110 ']' 00:32:18.025 13:32:28 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 1059110 00:32:18.025 13:32:28 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:32:18.025 13:32:28 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:18.025 13:32:28 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1059110 00:32:18.025 13:32:28 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:18.025 13:32:28 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:18.025 13:32:28 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1059110' 00:32:18.025 killing process with pid 1059110 00:32:18.025 13:32:28 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@969 -- # kill 1059110 00:32:18.025 13:32:28 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@974 -- # wait 1059110 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:32:18.285 00:32:18.285 real 0m1.264s 00:32:18.285 user 0m3.165s 00:32:18.285 sys 0m0.362s 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:32:18.285 ************************************ 00:32:18.285 END TEST bdev_bounds 00:32:18.285 ************************************ 00:32:18.285 13:32:28 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:32:18.285 13:32:28 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:18.285 13:32:28 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:18.285 13:32:28 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:18.285 ************************************ 00:32:18.285 START TEST bdev_nbd 00:32:18.285 ************************************ 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # uname 
-s 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram3') 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=1059291 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@316 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 1059291 /var/tmp/spdk-nbd.sock 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 1059291 ']' 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:32:18.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:18.285 13:32:28 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:32:18.285 [2024-07-25 13:32:28.766593] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:32:18.285 [2024-07-25 13:32:28.766655] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:18.545 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:18.545 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:18.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:18.545 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:18.545 [2024-07-25 13:32:28.900598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.545 [2024-07-25 13:32:28.985319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.806 [2024-07-25 13:32:29.161771] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:32:18.806 [2024-07-25 13:32:29.161835] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:18.806 [2024-07-25 13:32:29.161848] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:18.806 [2024-07-25 13:32:29.169790] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:32:18.806 [2024-07-25 13:32:29.169807] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:18.806 [2024-07-25 13:32:29.169818] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:18.806 [2024-07-25 13:32:29.177811] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:32:18.806 [2024-07-25 13:32:29.177827] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:32:18.806 [2024-07-25 13:32:29.177838] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:32:19.375 
13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:19.375 1+0 records in 00:32:19.375 1+0 records out 00:32:19.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025218 s, 16.2 MB/s 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:19.375 13:32:29 
blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:32:19.375 13:32:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:19.634 1+0 records in 00:32:19.634 1+0 records out 00:32:19.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067379 s, 6.1 MB/s 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c 
%s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:32:19.634 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:19.892 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:32:19.892 { 00:32:19.892 "nbd_device": "/dev/nbd0", 00:32:19.892 "bdev_name": "crypto_ram" 00:32:19.892 }, 00:32:19.892 { 00:32:19.892 "nbd_device": "/dev/nbd1", 00:32:19.892 "bdev_name": "crypto_ram3" 00:32:19.892 } 00:32:19.892 ]' 00:32:19.892 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:32:19.892 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:32:19.892 { 00:32:19.892 "nbd_device": "/dev/nbd0", 00:32:19.892 "bdev_name": "crypto_ram" 00:32:19.892 }, 00:32:19.892 { 00:32:19.892 "nbd_device": "/dev/nbd1", 00:32:19.892 "bdev_name": "crypto_ram3" 00:32:19.892 } 00:32:19.892 ]' 00:32:19.892 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:32:19.892 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:32:19.892 13:32:30 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:19.892 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:19.892 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:19.892 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:19.892 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:19.892 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:20.149 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:20.149 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:20.149 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:20.149 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:20.149 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:20.149 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:20.149 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:20.149 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:20.149 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:20.149 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:32:20.407 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:20.407 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:20.407 13:32:30 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:20.407 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:20.407 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:20.407 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:20.407 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:20.407 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:20.407 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:20.407 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:20.407 13:32:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 
00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:32:20.665 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:20.666 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:20.666 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:20.666 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:32:20.666 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:20.666 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:20.666 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 
00:32:20.923 /dev/nbd0 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:20.923 1+0 records in 00:32:20.923 1+0 records out 00:32:20.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025542 s, 16.0 MB/s 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 
0 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:20.923 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:32:21.181 /dev/nbd1 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:21.181 1+0 records in 00:32:21.181 1+0 records out 00:32:21.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328383 s, 12.5 MB/s 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:21.181 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:21.439 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:32:21.439 { 00:32:21.439 "nbd_device": "/dev/nbd0", 00:32:21.439 "bdev_name": "crypto_ram" 00:32:21.439 }, 00:32:21.439 { 00:32:21.439 "nbd_device": "/dev/nbd1", 00:32:21.439 "bdev_name": "crypto_ram3" 00:32:21.439 } 00:32:21.439 ]' 00:32:21.439 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:32:21.439 { 00:32:21.439 "nbd_device": "/dev/nbd0", 00:32:21.439 "bdev_name": "crypto_ram" 00:32:21.439 }, 00:32:21.439 { 00:32:21.439 "nbd_device": "/dev/nbd1", 00:32:21.439 "bdev_name": "crypto_ram3" 00:32:21.439 } 00:32:21.439 ]' 00:32:21.439 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:21.697 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:32:21.697 /dev/nbd1' 00:32:21.697 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:32:21.697 /dev/nbd1' 00:32:21.697 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:21.697 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:32:21.697 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:32:21.697 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:32:21.697 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:32:21.697 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:32:21.697 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:21.697 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:21.697 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:32:21.698 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:21.698 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:32:21.698 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:32:21.698 256+0 records in 00:32:21.698 256+0 records out 00:32:21.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104197 s, 101 MB/s 00:32:21.698 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:21.698 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:32:21.698 256+0 records in 00:32:21.698 256+0 records out 00:32:21.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280187 s, 37.4 MB/s 00:32:21.698 13:32:31 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:21.698 13:32:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:32:21.698 256+0 records in 00:32:21.698 256+0 records out 00:32:21.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0329919 s, 31.8 MB/s 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:21.698 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:21.957 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:21.957 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:21.957 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:21.957 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:21.957 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:21.957 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:21.957 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:21.957 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:21.957 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:21.957 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:32:22.215 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:22.215 13:32:32 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:22.215 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:22.215 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:22.215 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:22.215 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:22.215 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:22.215 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:22.215 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:22.215 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:22.215 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 
00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:32:22.473 13:32:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:32:22.731 malloc_lvol_verify 00:32:22.731 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:32:22.992 078ae059-38ee-4f90-9226-e0cd998e974b 00:32:22.992 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:32:22.992 d1ebc384-1623-4911-a4bc-a3af81e126ad 00:32:23.252 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:32:23.252 /dev/nbd0 00:32:23.252 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:32:23.252 mke2fs 1.46.5 (30-Dec-2021) 00:32:23.252 Discarding device blocks: 0/4096 done 00:32:23.252 Creating filesystem with 4096 1k 
blocks and 1024 inodes 00:32:23.252 00:32:23.252 Allocating group tables: 0/1 done 00:32:23.252 Writing inode tables: 0/1 done 00:32:23.252 Creating journal (1024 blocks): done 00:32:23.252 Writing superblocks and filesystem accounting information: 0/1 done 00:32:23.252 00:32:23.252 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:32:23.510 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:23.510 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:23.510 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:23.510 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:23.510 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:23.510 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:23.510 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:23.510 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:23.510 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:23.510 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:23.510 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:23.510 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:23.510 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:23.510 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:23.510 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:23.768 13:32:33 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:32:23.768 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:32:23.768 13:32:33 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 1059291 00:32:23.768 13:32:33 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 1059291 ']' 00:32:23.768 13:32:33 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 1059291 00:32:23.768 13:32:33 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:32:23.768 13:32:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:23.768 13:32:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1059291 00:32:23.768 13:32:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:23.768 13:32:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:23.768 13:32:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1059291' 00:32:23.768 killing process with pid 1059291 00:32:23.769 13:32:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@969 -- # kill 1059291 00:32:23.769 13:32:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@974 -- # wait 1059291 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:32:24.027 00:32:24.027 real 0m5.563s 00:32:24.027 user 0m7.824s 00:32:24.027 sys 0m2.290s 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:32:24.027 ************************************ 00:32:24.027 END TEST bdev_nbd 00:32:24.027 ************************************ 00:32:24.027 13:32:34 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # 
[[ y == y ]] 00:32:24.027 13:32:34 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = nvme ']' 00:32:24.027 13:32:34 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = gpt ']' 00:32:24.027 13:32:34 blockdev_crypto_sw -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:32:24.027 13:32:34 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:24.027 13:32:34 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:24.027 13:32:34 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:24.027 ************************************ 00:32:24.027 START TEST bdev_fio 00:32:24.027 ************************************ 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:32:24.027 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio 
-- common/autotest_common.sh@1281 -- # local workload=verify 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo 
filename=crypto_ram 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:24.027 13:32:34 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:24.027 ************************************ 00:32:24.027 START TEST bdev_fio_rw_verify 00:32:24.027 ************************************ 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:24.028 13:32:34 
blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 
00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:24.028 13:32:34 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:24.618 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:24.618 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:24.618 fio-3.35 00:32:24.618 Starting 2 threads 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 
0000:3d:01.0 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3d:02.6 cannot be 
used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:24.618 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:24.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:24.618 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:36.797 00:32:36.797 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=1060626: Thu Jul 25 13:32:45 2024 00:32:36.797 read: IOPS=23.4k, BW=91.4MiB/s (95.9MB/s)(914MiB/10000msec) 00:32:36.797 slat (usec): min=13, max=144, avg=18.97, stdev= 3.54 00:32:36.797 clat (usec): min=7, max=351, avg=136.15, stdev=54.32 00:32:36.797 lat (usec): min=25, max=399, avg=155.11, stdev=55.67 00:32:36.797 clat percentiles (usec): 00:32:36.797 | 50.000th=[ 133], 99.000th=[ 260], 99.900th=[ 281], 99.990th=[ 314], 00:32:36.797 | 99.999th=[ 347] 00:32:36.797 write: IOPS=28.1k, BW=110MiB/s (115MB/s)(1040MiB/9483msec); 0 zone resets 00:32:36.797 slat (usec): min=13, max=1528, avg=31.53, stdev= 5.09 00:32:36.797 clat (usec): min=14, max=1790, avg=181.99, stdev=83.14 00:32:36.797 lat (usec): min=42, max=1814, avg=213.52, stdev=84.70 00:32:36.797 clat percentiles (usec): 00:32:36.797 | 50.000th=[ 178], 99.000th=[ 359], 99.900th=[ 379], 99.990th=[ 644], 00:32:36.797 | 99.999th=[ 873] 00:32:36.797 bw ( KiB/s): min=99240, max=113376, per=94.97%, avg=106708.21, stdev=2191.11, samples=38 00:32:36.797 iops : min=24810, max=28344, avg=26677.16, stdev=547.82, samples=38 00:32:36.797 lat (usec) : 10=0.01%, 20=0.01%, 50=5.32%, 100=18.23%, 250=63.62% 00:32:36.797 lat (usec) : 500=12.80%, 750=0.01%, 1000=0.01% 00:32:36.797 lat (msec) : 2=0.01% 00:32:36.797 cpu : usr=99.61%, sys=0.01%, ctx=27, majf=0, minf=475 00:32:36.797 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:36.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.797 
complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.797 issued rwts: total=234078,266365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.797 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:36.797 00:32:36.797 Run status group 0 (all jobs): 00:32:36.797 READ: bw=91.4MiB/s (95.9MB/s), 91.4MiB/s-91.4MiB/s (95.9MB/s-95.9MB/s), io=914MiB (959MB), run=10000-10000msec 00:32:36.797 WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=1040MiB (1091MB), run=9483-9483msec 00:32:36.797 00:32:36.797 real 0m11.131s 00:32:36.797 user 0m32.083s 00:32:36.797 sys 0m0.355s 00:32:36.797 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:36.797 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:32:36.797 ************************************ 00:32:36.797 END TEST bdev_fio_rw_verify 00:32:36.797 ************************************ 00:32:36.797 13:32:45 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:32:36.797 13:32:45 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:36.797 13:32:45 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:32:36.797 13:32:45 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:36.797 13:32:45 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:32:36.797 13:32:45 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:32:36.797 13:32:45 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:32:36.797 13:32:45 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local 
fio_dir=/usr/src/fio 00:32:36.797 13:32:45 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "3c5fc3dd-d5b2-5d0e-b916-aa3a69b8932d"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3c5fc3dd-d5b2-5d0e-b916-aa3a69b8932d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "b1ff8df5-b228-50f4-ab6f-8ce6b97cb6ff"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "b1ff8df5-b228-50f4-ab6f-8ce6b97cb6ff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:32:36.798 crypto_ram3 ]] 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "3c5fc3dd-d5b2-5d0e-b916-aa3a69b8932d"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"3c5fc3dd-d5b2-5d0e-b916-aa3a69b8932d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "b1ff8df5-b228-50f4-ab6f-8ce6b97cb6ff"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "b1ff8df5-b228-50f4-ab6f-8ce6b97cb6ff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' 
"name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:36.798 ************************************ 00:32:36.798 START TEST bdev_fio_trim 00:32:36.798 ************************************ 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:36.798 13:32:45 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:36.798 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:36.798 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:36.798 fio-3.35 00:32:36.798 
Starting 2 threads 00:32:36.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.798 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:36.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.798 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:36.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.798 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:36.798 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.798 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3d:02.5 cannot be used 
00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:36.799 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:36.799 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:36.799 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:46.785 00:32:46.785 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=1062615: Thu Jul 25 13:32:56 2024 00:32:46.785 write: IOPS=26.9k, BW=105MiB/s (110MB/s)(1052MiB/10001msec); 0 zone resets 00:32:46.785 slat (usec): min=17, max=2012, avg=32.60, stdev=10.32 00:32:46.785 clat (usec): min=87, max=2333, avg=244.75, stdev=58.63 00:32:46.785 lat (usec): min=120, max=2375, avg=277.35, stdev=56.27 00:32:46.785 clat percentiles (usec): 00:32:46.785 | 50.000th=[ 253], 99.000th=[ 338], 99.900th=[ 375], 99.990th=[ 562], 00:32:46.785 | 99.999th=[ 2245] 00:32:46.785 bw ( KiB/s): min=104888, max=109960, per=100.00%, avg=107764.63, stdev=513.09, samples=38 00:32:46.785 iops : min=26222, max=27490, avg=26941.16, stdev=128.27, samples=38 00:32:46.785 trim: IOPS=26.9k, BW=105MiB/s (110MB/s)(1052MiB/10001msec); 0 zone resets 00:32:46.785 slat (nsec): min=7014, max=94858, avg=14567.49, stdev=5068.79 00:32:46.785 clat (usec): min=41, max=1083, avg=163.37, stdev=91.83 00:32:46.785 lat (usec): min=51, max=1104, avg=177.93, stdev=94.98 00:32:46.785 clat percentiles (usec): 00:32:46.785 | 50.000th=[ 135], 99.000th=[ 371], 99.900th=[ 420], 99.990th=[ 457], 00:32:46.785 | 99.999th=[ 725] 00:32:46.785 bw ( KiB/s): min=104920, max=109968, per=100.00%, avg=107765.89, stdev=512.85, samples=38 00:32:46.785 iops : min=26230, max=27492, avg=26941.47, stdev=128.21, samples=38 00:32:46.785 lat (usec) : 50=0.98%, 100=14.11%, 250=48.34%, 500=36.55%, 
750=0.01% 00:32:46.785 lat (usec) : 1000=0.01% 00:32:46.785 lat (msec) : 2=0.01%, 4=0.01% 00:32:46.785 cpu : usr=99.47%, sys=0.01%, ctx=29, majf=0, minf=252 00:32:46.785 IO depths : 1=5.0%, 2=13.7%, 4=65.1%, 8=16.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:46.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.785 complete : 0=0.0%, 4=86.0%, 8=14.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.785 issued rwts: total=0,269220,269221,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.785 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:46.785 00:32:46.785 Run status group 0 (all jobs): 00:32:46.785 WRITE: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=1052MiB (1103MB), run=10001-10001msec 00:32:46.786 TRIM: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=1052MiB (1103MB), run=10001-10001msec 00:32:46.786 00:32:46.786 real 0m11.169s 00:32:46.786 user 0m32.741s 00:32:46.786 sys 0m0.373s 00:32:46.786 13:32:56 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:46.786 13:32:56 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:32:46.786 ************************************ 00:32:46.786 END TEST bdev_fio_trim 00:32:46.786 ************************************ 00:32:46.786 13:32:56 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:32:46.786 13:32:56 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:46.786 13:32:56 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:32:46.786 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:32:46.786 13:32:56 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:32:46.786 00:32:46.786 real 0m22.640s 00:32:46.786 user 1m5.007s 00:32:46.786 sys 0m0.907s 00:32:46.786 13:32:56 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:32:46.786 13:32:56 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:46.786 ************************************ 00:32:46.786 END TEST bdev_fio 00:32:46.786 ************************************ 00:32:46.786 13:32:57 blockdev_crypto_sw -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:46.786 13:32:57 blockdev_crypto_sw -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:46.786 13:32:57 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:32:46.786 13:32:57 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:46.786 13:32:57 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:46.786 ************************************ 00:32:46.786 START TEST bdev_verify 00:32:46.786 ************************************ 00:32:46.786 13:32:57 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:46.786 [2024-07-25 13:32:57.093710] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
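The `[job_crypto_ram]` / `filename=...` sections echoed above come from filtering the bdev JSON dump for unmap support. As an illustrative aside (not part of the original run), a minimal Python sketch of the same `jq` selection, with records abbreviated from the dump above (real records carry many more fields):

```python
# Abbreviated bdev records from the JSON dump above; only the fields the filter reads.
bdevs = [
    {"name": "crypto_ram",  "supported_io_types": {"unmap": True}},
    {"name": "crypto_ram3", "supported_io_types": {"unmap": True}},
]

# Equivalent of: jq -r 'select(.supported_io_types.unmap == true) | .name'
names = [b["name"] for b in bdevs if b["supported_io_types"]["unmap"]]

# One fio job section per unmap-capable bdev, matching the echoes in the trace.
for n in names:
    print(f"[job_{n}]")
    print(f"filename={n}")
```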
00:32:46.786 [2024-07-25 13:32:57.093766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1064453 ] 00:32:46.786 [2024-07-25 13:32:57.226071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:47.043 [2024-07-25 13:32:57.313753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.043 [2024-07-25 13:32:57.313759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.043 [2024-07-25 13:32:57.472290] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:32:47.043 [2024-07-25 13:32:57.472344] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:47.043 [2024-07-25 13:32:57.472357] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:47.043 [2024-07-25 13:32:57.480311] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:32:47.043 [2024-07-25 13:32:57.480328] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:47.043 [2024-07-25 13:32:57.480339] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:47.043 [2024-07-25 13:32:57.488333] vbdev_crypto_rpc.c:
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:32:47.043 [2024-07-25 13:32:57.488349] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:32:47.043 [2024-07-25 13:32:57.488360] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:47.300 Running I/O for 5 seconds... 00:32:52.549 00:32:52.549 Latency(us) 00:32:52.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.549 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:52.549 Verification LBA range: start 0x0 length 0x800 00:32:52.549 crypto_ram : 5.02 5867.46 22.92 0.00 0.00 21722.52 1435.24 27682.41 00:32:52.549 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:52.549 Verification LBA range: start 0x800 length 0x800 00:32:52.549 crypto_ram : 5.02 5868.41 22.92 0.00 0.00 21719.26 1638.40 27682.41 00:32:52.549 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:52.549 Verification LBA range: start 0x0 length 0x800 00:32:52.549 crypto_ram3 : 5.02 2932.27 11.45 0.00 0.00 43405.20 7497.32 31876.71 00:32:52.549 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:52.549 Verification LBA range: start 0x800 length 0x800 00:32:52.549 crypto_ram3 : 5.03 2952.19 11.53 0.00 0.00 43112.03 1782.58 31876.71 00:32:52.549 =================================================================================================================== 00:32:52.549 Total : 17620.33 68.83 0.00 0.00 28920.70 1435.24 31876.71 00:32:52.549 00:32:52.549 real 0m5.748s 00:32:52.549 user 0m10.832s 00:32:52.549 sys 0m0.236s 00:32:52.549 13:33:02 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:52.550 13:33:02 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:32:52.550 ************************************ 00:32:52.550 END 
TEST bdev_verify 00:32:52.550 ************************************ 00:32:52.550 13:33:02 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:52.550 13:33:02 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:32:52.550 13:33:02 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:52.550 13:33:02 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:52.550 ************************************ 00:32:52.550 START TEST bdev_verify_big_io 00:32:52.550 ************************************ 00:32:52.550 13:33:02 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:52.550 [2024-07-25 13:33:02.921047] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
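The trim run's headline bandwidth above follows directly from its IOPS at the `--bs=4k` block size the test passed to fio. As a quick illustrative check (editor's sketch, using only the averages fio reported above):

```python
# fio reported avg=26941.16 IOPS and avg=107764.63 KiB/s for the bs=4k trim run.
iops = 26941.16
bw_kib_s = iops * 4                     # 4 KiB transferred per I/O at --bs=4k
assert abs(bw_kib_s - 107764.63) < 1.0  # matches the reported avg bandwidth
assert round(bw_kib_s / 1024) == 105    # i.e. the "105MiB/s" in the run status line
```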
00:32:52.550 [2024-07-25 13:33:02.921101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065436 ] 00:32:52.807 [2024-07-25 13:33:03.051980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:52.807 [2024-07-25 13:33:03.137731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.807 [2024-07-25 13:33:03.137737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.064 [2024-07-25 13:33:03.301562] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:32:53.064 [2024-07-25 13:33:03.301613] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:53.064 [2024-07-25 13:33:03.301627] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:53.064 [2024-07-25 13:33:03.309583] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:32:53.064 [2024-07-25 13:33:03.309600] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:53.064 [2024-07-25 13:33:03.309610] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:53.064 [2024-07-25 13:33:03.317607] vbdev_crypto_rpc.c:
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3"
00:32:53.064 [2024-07-25 13:33:03.317623] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2
00:32:53.064 [2024-07-25 13:33:03.317634] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:53.064 Running I/O for 5 seconds...
00:32:58.316
00:32:58.316 Latency(us)
00:32:58.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:58.316 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:58.316 Verification LBA range: start 0x0 length 0x80
00:32:58.316 crypto_ram : 5.11 550.83 34.43 0.00 0.00 227540.67 5452.60 322122.55
00:32:58.316 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:58.316 Verification LBA range: start 0x80 length 0x80
00:32:58.316 crypto_ram : 5.07 555.90 34.74 0.00 0.00 225566.96 5373.95 318767.10
00:32:58.316 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:58.316 Verification LBA range: start 0x0 length 0x80
00:32:58.316 crypto_ram3 : 5.22 294.33 18.40 0.00 0.00 412371.00 5216.67 330511.16
00:32:58.316 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:58.316 Verification LBA range: start 0x80 length 0x80
00:32:58.316 crypto_ram3 : 5.19 296.01 18.50 0.00 0.00 410084.86 5216.67 327155.71
00:32:58.316 ===================================================================================================================
00:32:58.316 Total : 1697.07 106.07 0.00 0.00 291732.91 5216.67 330511.16
00:32:58.573
00:32:58.573 real 0m5.942s
00:32:58.573 user 0m11.245s
00:32:58.573 sys 0m0.213s
00:32:58.573 13:33:08 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:58.573 13:33:08 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:32:58.573 ************************************
00:32:58.573 END TEST bdev_verify_big_io
00:32:58.573 ************************************
00:32:58.573 13:33:08 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:58.573 13:33:08 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:32:58.573 13:33:08 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:58.573 13:33:08 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:32:58.573 ************************************
00:32:58.573 START TEST bdev_write_zeroes
00:32:58.573 ************************************
00:32:58.573 13:33:08 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:58.573 [2024-07-25 13:33:08.948252] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:32:58.573 [2024-07-25 13:33:08.948306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066877 ] 00:32:58.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.573 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:58.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.573 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:58.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.573 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:58.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.573 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:58.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.573 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:58.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.573 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:58.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.573 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:58.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.573 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3d:02.3 cannot be used 
00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:58.574 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:58.574 
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.574 EAL: Requested device 0000:3f:02.2 cannot be used
00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.574 EAL: Requested device 0000:3f:02.3 cannot be used
00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.574 EAL: Requested device 0000:3f:02.4 cannot be used
00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.574 EAL: Requested device 0000:3f:02.5 cannot be used
00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.574 EAL: Requested device 0000:3f:02.6 cannot be used
00:32:58.574 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:32:58.574 EAL: Requested device 0000:3f:02.7 cannot be used
00:32:58.830 [2024-07-25 13:33:09.080157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:58.830 [2024-07-25 13:33:09.162342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:59.087 [2024-07-25 13:33:09.331087] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw"
00:32:59.087 [2024-07-25 13:33:09.331144] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:32:59.087 [2024-07-25 13:33:09.331159] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:59.087 [2024-07-25 13:33:09.339107] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2"
00:32:59.087 [2024-07-25 13:33:09.339125] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:32:59.087 [2024-07-25 13:33:09.339136] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:59.087 [2024-07-25 13:33:09.347128] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3"
00:32:59.087 [2024-07-25 13:33:09.347151] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2
00:32:59.087 [2024-07-25 13:33:09.347162] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:59.087 Running I/O for 1 seconds...
00:33:00.017
00:33:00.017 Latency(us)
00:33:00.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:00.017 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:33:00.017 crypto_ram : 1.01 28610.11 111.76 0.00 0.00 4464.09 1939.87 6212.81
00:33:00.017 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:33:00.017 crypto_ram3 : 1.01 14334.47 55.99 0.00 0.00 8868.09 3119.51 9279.90
00:33:00.017 ===================================================================================================================
00:33:00.017 Total : 42944.59 167.75 0.00 0.00 5936.43 1939.87 9279.90
00:33:00.274
00:33:00.274 real 0m1.713s
00:33:00.274 user 0m1.465s
00:33:00.274 sys 0m0.227s
00:33:00.274 13:33:10 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:00.274 13:33:10 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:33:00.274 ************************************
00:33:00.274 END TEST bdev_write_zeroes
00:33:00.274 ************************************
00:33:00.274 13:33:10 blockdev_crypto_sw -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:33:00.274 13:33:10 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:33:00.274 13:33:10 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable
00:33:00.274 13:33:10 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
************************************
00:33:00.274 START TEST bdev_json_nonenclosed
00:33:00.274 ************************************
00:33:00.274 13:33:10 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:33:00.274 [2024-07-25 13:33:10.747164] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:33:00.274 [2024-07-25 13:33:10.747224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067157 ]
00:33:00.532 [qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device 0000:3d:01.0 through 0000:3f:02.7 cannot be used; repeated for every QAT function, identical to the first block above]
00:33:00.532 [2024-07-25 13:33:10.880533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:00.532 [2024-07-25 13:33:10.964217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:00.532 [2024-07-25 13:33:10.964282] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:33:00.532 [2024-07-25 13:33:10.964299] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:33:00.532 [2024-07-25 13:33:10.964310] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:33:00.789
00:33:00.789 real 0m0.363s
00:33:00.789 user 0m0.210s
00:33:00.789 sys 0m0.150s
00:33:00.789 13:33:11 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:00.789 13:33:11 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:33:00.789 ************************************
00:33:00.789 END TEST bdev_json_nonenclosed
00:33:00.789 ************************************
00:33:00.789 13:33:11 blockdev_crypto_sw -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:33:00.789 13:33:11 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:33:00.789 13:33:11 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable
00:33:00.789 13:33:11 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:33:00.789 ************************************
00:33:00.789 START TEST bdev_json_nonarray
00:33:00.789 ************************************
00:33:00.789 13:33:11 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:33:00.789 [2024-07-25 13:33:11.188997] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:33:00.789 [2024-07-25 13:33:11.189050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067300 ]
00:33:00.789 [qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device 0000:3d:01.0 through 0000:3f:02.7 cannot be used; repeated for every QAT function, identical to the first block above]
00:33:01.046 [2024-07-25 13:33:11.317929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:01.046 [2024-07-25 13:33:11.401808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:01.046 [2024-07-25 13:33:11.401879] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:33:01.046 [2024-07-25 13:33:11.401895] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:33:01.046 [2024-07-25 13:33:11.401906] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:33:01.046
00:33:01.046 real 0m0.356s
00:33:01.046 user 0m0.205s
00:33:01.046 sys 0m0.148s
00:33:01.046 13:33:11 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:01.046 13:33:11 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:33:01.046 ************************************
00:33:01.046 END TEST bdev_json_nonarray
00:33:01.046 ************************************
00:33:01.046 13:33:11 blockdev_crypto_sw -- bdev/blockdev.sh@786 -- # [[ crypto_sw == bdev ]]
00:33:01.046 13:33:11 blockdev_crypto_sw -- bdev/blockdev.sh@793 -- # [[ crypto_sw == gpt ]]
00:33:01.046 13:33:11 blockdev_crypto_sw -- bdev/blockdev.sh@797 -- # [[ crypto_sw == crypto_sw ]]
00:33:01.046 13:33:11 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # run_test bdev_crypto_enomem bdev_crypto_enomem
00:33:01.046 13:33:11 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:33:01.046 13:33:11 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable
00:33:01.046 13:33:11 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:33:01.304 ************************************
00:33:01.304 START TEST bdev_crypto_enomem
00:33:01.304 ************************************
00:33:01.304 13:33:11 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1125 -- # bdev_crypto_enomem
00:33:01.304 13:33:11 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@634 -- # local base_dev=base0
00:33:01.304 13:33:11 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local test_dev=crypt0
00:33:01.304 13:33:11 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 -- # local err_dev=EE_base0
00:33:01.304 13:33:11 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local qd=32
00:33:01.304 13:33:11 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # ERR_PID=1067455
00:33:01.304 13:33:11 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f ''
00:33:01.304 13:33:11 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT
00:33:01.304 13:33:11 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # waitforlisten 1067455
00:33:01.304 13:33:11 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@831 -- # '[' -z 1067455 ']'
00:33:01.304 13:33:11 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:01.304 13:33:11 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:01.304 13:33:11 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:01.305 13:33:11 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:01.305 13:33:11 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:33:01.305 [2024-07-25 13:33:11.625960] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:33:01.305 [2024-07-25 13:33:11.626012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067455 ]
00:33:01.305 [qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device 0000:3d:01.0 through 0000:3f:02.7 cannot be used; repeated for every QAT function, identical to the first block above]
00:33:01.305 [2024-07-25 13:33:11.745043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:01.562 [2024-07-25 13:33:11.830863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@864 -- # return 0
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@644 -- # rpc_cmd
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:33:02.126 true
00:33:02.126 base0
00:33:02.126 true
00:33:02.126 [2024-07-25 13:33:12.570072] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw"
00:33:02.126 crypt0
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@651 -- # waitforbdev crypt0
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@899 -- # local bdev_name=crypt0
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@901 -- # local i
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b crypt0 -t 2000
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:02.126 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:33:02.126 [
00:33:02.126 {
00:33:02.126 "name": "crypt0",
00:33:02.126 "aliases": [
00:33:02.126 "0289b964-2ed2-5990-8cfe-34d10494768d"
00:33:02.126 ],
00:33:02.126 "product_name": "crypto",
00:33:02.126 "block_size": 512,
00:33:02.126 "num_blocks": 2097152,
00:33:02.126 "uuid": "0289b964-2ed2-5990-8cfe-34d10494768d",
00:33:02.126 "assigned_rate_limits": {
00:33:02.126 "rw_ios_per_sec": 0,
00:33:02.126 "rw_mbytes_per_sec": 0,
00:33:02.126 "r_mbytes_per_sec": 0,
00:33:02.126 "w_mbytes_per_sec": 0
00:33:02.126 },
00:33:02.126 "claimed": false,
00:33:02.126 "zoned": false,
00:33:02.126 "supported_io_types": {
00:33:02.126 "read": true,
00:33:02.126 "write": true,
00:33:02.126 "unmap": false,
00:33:02.126 "flush": false,
00:33:02.126 "reset": true,
00:33:02.126 "nvme_admin": false,
00:33:02.126 "nvme_io": false,
00:33:02.126 "nvme_io_md": false,
00:33:02.126 "write_zeroes": true,
00:33:02.126 "zcopy": false,
00:33:02.126 "get_zone_info": false,
00:33:02.126 "zone_management": false,
00:33:02.126 "zone_append": false,
00:33:02.127 "compare": false,
00:33:02.127 "compare_and_write": false,
00:33:02.127 "abort": false,
00:33:02.127 "seek_hole": false,
00:33:02.127 "seek_data": false,
00:33:02.127 "copy": false,
00:33:02.127 "nvme_iov_md": false
00:33:02.127 },
00:33:02.127 "memory_domains": [
00:33:02.127 {
00:33:02.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:33:02.127 "dma_device_type": 2
00:33:02.127 }
00:33:02.127 ],
00:33:02.127 "driver_specific": {
00:33:02.127 "crypto": {
00:33:02.127 "base_bdev_name": "EE_base0",
00:33:02.127 "name": "crypt0",
00:33:02.127 "key_name": "test_dek_sw"
00:33:02.127 }
00:33:02.127 }
00:33:02.127 }
00:33:02.127 ]
00:33:02.127 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:02.127 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@907 -- # return 0
00:33:02.127 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # rpcpid=1067531
00:33:02.127 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@656 -- # sleep 1
00:33:02.127 13:33:12 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:33:02.383 Running I/O for 5 seconds...
00:33:03.316 13:33:13 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem 00:33:03.316 13:33:13 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.316 13:33:13 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:03.317 13:33:13 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.317 13:33:13 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@659 -- # wait 1067531 00:33:07.531 00:33:07.531 Latency(us) 00:33:07.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.531 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096) 00:33:07.531 crypt0 : 5.00 38990.26 152.31 0.00 0.00 817.25 394.85 1074.79 00:33:07.531 =================================================================================================================== 00:33:07.531 Total : 38990.26 152.31 0.00 0.00 817.25 394.85 1074.79 00:33:07.531 0 00:33:07.531 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@661 -- # rpc_cmd bdev_crypto_delete crypt0 00:33:07.531 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.531 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:07.531 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.531 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@663 -- # killprocess 1067455 00:33:07.531 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@950 -- # '[' -z 1067455 ']' 00:33:07.531 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # kill -0 1067455 00:33:07.531 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # uname 00:33:07.531 13:33:17 
blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:07.531 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1067455 00:33:07.531 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:07.531 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:07.531 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1067455' 00:33:07.531 killing process with pid 1067455 00:33:07.531 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@969 -- # kill 1067455 00:33:07.531 Received shutdown signal, test time was about 5.000000 seconds 00:33:07.531 00:33:07.531 Latency(us) 00:33:07.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.531 =================================================================================================================== 00:33:07.531 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:07.531 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@974 -- # wait 1067455 00:33:07.531 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # trap - SIGINT SIGTERM EXIT 00:33:07.531 00:33:07.531 real 0m6.418s 00:33:07.531 user 0m6.676s 00:33:07.531 sys 0m0.367s 00:33:07.532 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:07.532 13:33:17 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:33:07.532 ************************************ 00:33:07.532 END TEST bdev_crypto_enomem 00:33:07.532 ************************************ 00:33:07.790 13:33:18 blockdev_crypto_sw -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:33:07.790 13:33:18 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # 
cleanup 00:33:07.790 13:33:18 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:33:07.790 13:33:18 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:33:07.790 13:33:18 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]] 00:33:07.790 13:33:18 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]] 00:33:07.790 13:33:18 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]] 00:33:07.790 13:33:18 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]] 00:33:07.790 00:33:07.790 real 0m53.352s 00:33:07.790 user 1m49.228s 00:33:07.790 sys 0m6.282s 00:33:07.790 13:33:18 blockdev_crypto_sw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:07.790 13:33:18 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:33:07.790 ************************************ 00:33:07.790 END TEST blockdev_crypto_sw 00:33:07.790 ************************************ 00:33:07.790 13:33:18 -- spdk/autotest.sh@363 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:33:07.790 13:33:18 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:07.790 13:33:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:07.790 13:33:18 -- common/autotest_common.sh@10 -- # set +x 00:33:07.790 ************************************ 00:33:07.790 START TEST blockdev_crypto_qat 00:33:07.790 ************************************ 00:33:07.790 13:33:18 blockdev_crypto_qat -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:33:07.790 * Looking for test storage... 
00:33:07.790 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # : 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # uname -s 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@681 -- # test_type=crypto_qat 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # crypto_device= 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # dek= 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # 
env_ctx= 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == bdev ]] 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == crypto_* ]] 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1068576 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:33:07.790 13:33:18 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 1068576 00:33:07.790 13:33:18 blockdev_crypto_qat -- common/autotest_common.sh@831 -- # '[' -z 1068576 ']' 00:33:07.790 13:33:18 blockdev_crypto_qat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.790 13:33:18 blockdev_crypto_qat -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:07.790 13:33:18 blockdev_crypto_qat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.790 13:33:18 blockdev_crypto_qat -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:07.790 13:33:18 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:08.048 [2024-07-25 13:33:18.369444] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:33:08.048 [2024-07-25 13:33:18.369578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1068576 ] 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3d:02.3 cannot be used 
00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:08.048 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.048 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:08.048 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.049 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:08.049 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:08.049 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:08.306 [2024-07-25 13:33:18.575109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.306 [2024-07-25 13:33:18.659039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.873 13:33:19 blockdev_crypto_qat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:08.873 13:33:19 blockdev_crypto_qat -- common/autotest_common.sh@864 -- # return 0 00:33:08.873 13:33:19 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:33:08.873 13:33:19 blockdev_crypto_qat -- bdev/blockdev.sh@707 -- # setup_crypto_qat_conf 00:33:08.873 13:33:19 blockdev_crypto_qat -- bdev/blockdev.sh@169 -- # rpc_cmd 00:33:08.873 13:33:19 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.873 13:33:19 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:08.873 [2024-07-25 13:33:19.208871] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:33:08.873 [2024-07-25 13:33:19.216905] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:33:08.873 [2024-07-25 13:33:19.224922] 
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:33:08.873 [2024-07-25 13:33:19.296801] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:33:11.404 true 00:33:11.404 true 00:33:11.404 true 00:33:11.404 true 00:33:11.404 Malloc0 00:33:11.404 Malloc1 00:33:11.404 Malloc2 00:33:11.404 Malloc3 00:33:11.404 [2024-07-25 13:33:21.615128] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:33:11.404 crypto_ram 00:33:11.404 [2024-07-25 13:33:21.623151] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:33:11.404 crypto_ram1 00:33:11.404 [2024-07-25 13:33:21.631178] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:33:11.404 crypto_ram2 00:33:11.404 [2024-07-25 13:33:21.639198] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:33:11.404 crypto_ram3 00:33:11.404 [ 00:33:11.404 { 00:33:11.404 "name": "Malloc1", 00:33:11.404 "aliases": [ 00:33:11.404 "d028d2fb-8d65-4ad1-960a-8f5063263113" 00:33:11.404 ], 00:33:11.404 "product_name": "Malloc disk", 00:33:11.404 "block_size": 512, 00:33:11.404 "num_blocks": 65536, 00:33:11.404 "uuid": "d028d2fb-8d65-4ad1-960a-8f5063263113", 00:33:11.404 "assigned_rate_limits": { 00:33:11.404 "rw_ios_per_sec": 0, 00:33:11.404 "rw_mbytes_per_sec": 0, 00:33:11.404 "r_mbytes_per_sec": 0, 00:33:11.404 "w_mbytes_per_sec": 0 00:33:11.404 }, 00:33:11.404 "claimed": true, 00:33:11.404 "claim_type": "exclusive_write", 00:33:11.404 "zoned": false, 00:33:11.404 "supported_io_types": { 00:33:11.404 "read": true, 00:33:11.404 "write": true, 00:33:11.404 "unmap": true, 00:33:11.404 "flush": true, 00:33:11.404 "reset": true, 00:33:11.404 "nvme_admin": false, 00:33:11.404 "nvme_io": false, 00:33:11.404 "nvme_io_md": false, 00:33:11.404 "write_zeroes": true, 00:33:11.404 "zcopy": true, 00:33:11.404 
"get_zone_info": false, 00:33:11.404 "zone_management": false, 00:33:11.404 "zone_append": false, 00:33:11.404 "compare": false, 00:33:11.404 "compare_and_write": false, 00:33:11.404 "abort": true, 00:33:11.404 "seek_hole": false, 00:33:11.404 "seek_data": false, 00:33:11.404 "copy": true, 00:33:11.404 "nvme_iov_md": false 00:33:11.404 }, 00:33:11.404 "memory_domains": [ 00:33:11.404 { 00:33:11.404 "dma_device_id": "system", 00:33:11.404 "dma_device_type": 1 00:33:11.404 }, 00:33:11.404 { 00:33:11.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:11.404 "dma_device_type": 2 00:33:11.404 } 00:33:11.404 ], 00:33:11.404 "driver_specific": {} 00:33:11.404 } 00:33:11.404 ] 00:33:11.404 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.404 13:33:21 blockdev_crypto_qat -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:33:11.404 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.404 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:11.404 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.404 13:33:21 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # cat 00:33:11.404 13:33:21 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:33:11.404 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.404 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:11.404 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.404 13:33:21 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:33:11.404 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.404 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:11.404 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:33:11.404 13:33:21 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:33:11.404 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.404 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:11.404 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.404 13:33:21 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:33:11.404 13:33:21 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:33:11.404 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.404 13:33:21 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:33:11.404 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:11.404 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.404 13:33:21 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:33:11.404 13:33:21 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # jq -r .name 00:33:11.404 13:33:21 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "03697f00-f95b-589b-ab67-3995a8129ec2"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "03697f00-f95b-589b-ab67-3995a8129ec2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "22ca80fc-740a-5613-b233-5c1e7357c157"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "22ca80fc-740a-5613-b233-5c1e7357c157",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "e6f29e64-1e0d-5e22-9bc2-c58c0f05d486"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "e6f29e64-1e0d-5e22-9bc2-c58c0f05d486",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "3ed6a1a7-5adc-5427-889d-51eefafaf6f4"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "3ed6a1a7-5adc-5427-889d-51eefafaf6f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:33:11.663 13:33:21 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:33:11.663 13:33:21 blockdev_crypto_qat -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:33:11.663 13:33:21 blockdev_crypto_qat -- bdev/blockdev.sh@752 -- # trap 
- SIGINT SIGTERM EXIT 00:33:11.663 13:33:21 blockdev_crypto_qat -- bdev/blockdev.sh@753 -- # killprocess 1068576 00:33:11.663 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@950 -- # '[' -z 1068576 ']' 00:33:11.663 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # kill -0 1068576 00:33:11.663 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # uname 00:33:11.663 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:11.663 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1068576 00:33:11.663 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:11.663 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:11.663 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1068576' 00:33:11.663 killing process with pid 1068576 00:33:11.663 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@969 -- # kill 1068576 00:33:11.663 13:33:21 blockdev_crypto_qat -- common/autotest_common.sh@974 -- # wait 1068576 00:33:11.921 13:33:22 blockdev_crypto_qat -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:11.921 13:33:22 blockdev_crypto_qat -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:33:11.921 13:33:22 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:33:11.921 13:33:22 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:11.921 13:33:22 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:12.180 ************************************ 00:33:12.180 START TEST bdev_hello_world 00:33:12.180 ************************************ 00:33:12.180 13:33:22 
blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:33:12.180 [2024-07-25 13:33:22.498795] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:33:12.180 [2024-07-25 13:33:22.498851] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1069193 ] 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:33:12.180 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 
EAL: Requested device 0000:3f:01.7 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:12.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.180 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:12.180 [2024-07-25 13:33:22.629906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.439 [2024-07-25 13:33:22.712572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.439 [2024-07-25 13:33:22.733834] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:33:12.439 [2024-07-25 13:33:22.741862] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:33:12.439 [2024-07-25 13:33:22.749879] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:33:12.439 [2024-07-25 13:33:22.854583] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:33:14.970 [2024-07-25 
13:33:25.016042] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:33:14.970 [2024-07-25 13:33:25.016095] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:14.970 [2024-07-25 13:33:25.016108] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:14.970 [2024-07-25 13:33:25.024061] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:33:14.970 [2024-07-25 13:33:25.024079] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:14.970 [2024-07-25 13:33:25.024090] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:14.970 [2024-07-25 13:33:25.032082] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:33:14.970 [2024-07-25 13:33:25.032099] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:33:14.970 [2024-07-25 13:33:25.032109] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:14.970 [2024-07-25 13:33:25.040103] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:33:14.970 [2024-07-25 13:33:25.040119] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:33:14.970 [2024-07-25 13:33:25.040129] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:14.970 [2024-07-25 13:33:25.111841] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:33:14.970 [2024-07-25 13:33:25.111880] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:33:14.970 [2024-07-25 13:33:25.111896] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:33:14.970 [2024-07-25 13:33:25.113089] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to 
the bdev 00:33:14.970 [2024-07-25 13:33:25.113157] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:33:14.970 [2024-07-25 13:33:25.113173] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:33:14.970 [2024-07-25 13:33:25.113212] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:33:14.970 00:33:14.970 [2024-07-25 13:33:25.113229] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:33:14.970 00:33:14.970 real 0m2.978s 00:33:14.970 user 0m2.631s 00:33:14.970 sys 0m0.315s 00:33:14.970 13:33:25 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:14.970 13:33:25 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:33:14.970 ************************************ 00:33:14.970 END TEST bdev_hello_world 00:33:14.970 ************************************ 00:33:15.228 13:33:25 blockdev_crypto_qat -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:33:15.228 13:33:25 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:15.228 13:33:25 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:15.228 13:33:25 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:15.228 ************************************ 00:33:15.228 START TEST bdev_bounds 00:33:15.228 ************************************ 00:33:15.228 13:33:25 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:33:15.228 13:33:25 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=1069676 00:33:15.228 13:33:25 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:33:15.228 13:33:25 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:33:15.228 13:33:25 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 1069676' 00:33:15.228 Process bdevio pid: 1069676 00:33:15.228 13:33:25 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 1069676 00:33:15.228 13:33:25 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 1069676 ']' 00:33:15.228 13:33:25 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:15.228 13:33:25 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:15.228 13:33:25 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:15.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:15.228 13:33:25 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:15.228 13:33:25 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:33:15.228 [2024-07-25 13:33:25.558302] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:33:15.228 [2024-07-25 13:33:25.558349] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1069676 ] 00:33:15.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.228 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:15.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.228 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:15.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.228 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:15.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.228 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3d:02.3 cannot be used 
00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:15.229 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:15.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:15.229 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:15.229 [2024-07-25 13:33:25.675253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:15.487 [2024-07-25 13:33:25.761216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.487 [2024-07-25 13:33:25.761310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:15.487 [2024-07-25 13:33:25.761315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.487 [2024-07-25 13:33:25.782618] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:33:15.487 [2024-07-25 13:33:25.790640] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:33:15.487 [2024-07-25 13:33:25.798663] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:33:15.487 [2024-07-25 13:33:25.895344] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:33:18.015 [2024-07-25 13:33:28.055470] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:33:18.015 [2024-07-25 13:33:28.055551] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:18.015 [2024-07-25 13:33:28.055566] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:18.015 [2024-07-25 13:33:28.063488] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:33:18.015 [2024-07-25 13:33:28.063506] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:18.015 [2024-07-25 13:33:28.063517] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:18.015 [2024-07-25 13:33:28.071511] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:33:18.015 [2024-07-25 13:33:28.071526] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:33:18.015 [2024-07-25 13:33:28.071536] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:18.015 [2024-07-25 13:33:28.079532] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:33:18.015 [2024-07-25 13:33:28.079548] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:33:18.015 [2024-07-25 13:33:28.079558] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:18.015 13:33:28 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:18.015 13:33:28 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:33:18.015 13:33:28 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:33:18.015 I/O targets: 00:33:18.015 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:33:18.015 crypto_ram1: 65536 blocks of 512 bytes (32 MiB) 00:33:18.015 crypto_ram2: 8192 blocks of 4096 
bytes (32 MiB) 00:33:18.015 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:33:18.015 00:33:18.015 00:33:18.015 CUnit - A unit testing framework for C - Version 2.1-3 00:33:18.015 http://cunit.sourceforge.net/ 00:33:18.015 00:33:18.015 00:33:18.015 Suite: bdevio tests on: crypto_ram3 00:33:18.015 Test: blockdev write read block ...passed 00:33:18.015 Test: blockdev write zeroes read block ...passed 00:33:18.015 Test: blockdev write zeroes read no split ...passed 00:33:18.015 Test: blockdev write zeroes read split ...passed 00:33:18.015 Test: blockdev write zeroes read split partial ...passed 00:33:18.015 Test: blockdev reset ...passed 00:33:18.015 Test: blockdev write read 8 blocks ...passed 00:33:18.015 Test: blockdev write read size > 128k ...passed 00:33:18.015 Test: blockdev write read invalid size ...passed 00:33:18.015 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:18.015 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:18.015 Test: blockdev write read max offset ...passed 00:33:18.015 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:18.015 Test: blockdev writev readv 8 blocks ...passed 00:33:18.015 Test: blockdev writev readv 30 x 1block ...passed 00:33:18.015 Test: blockdev writev readv block ...passed 00:33:18.015 Test: blockdev writev readv size > 128k ...passed 00:33:18.015 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:18.015 Test: blockdev comparev and writev ...passed 00:33:18.015 Test: blockdev nvme passthru rw ...passed 00:33:18.015 Test: blockdev nvme passthru vendor specific ...passed 00:33:18.015 Test: blockdev nvme admin passthru ...passed 00:33:18.015 Test: blockdev copy ...passed 00:33:18.015 Suite: bdevio tests on: crypto_ram2 00:33:18.015 Test: blockdev write read block ...passed 00:33:18.015 Test: blockdev write zeroes read block ...passed 00:33:18.015 Test: blockdev write zeroes read no split ...passed 00:33:18.015 Test: 
blockdev write zeroes read split ...passed 00:33:18.015 Test: blockdev write zeroes read split partial ...passed 00:33:18.015 Test: blockdev reset ...passed 00:33:18.015 Test: blockdev write read 8 blocks ...passed 00:33:18.015 Test: blockdev write read size > 128k ...passed 00:33:18.015 Test: blockdev write read invalid size ...passed 00:33:18.015 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:18.015 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:18.015 Test: blockdev write read max offset ...passed 00:33:18.015 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:18.015 Test: blockdev writev readv 8 blocks ...passed 00:33:18.015 Test: blockdev writev readv 30 x 1block ...passed 00:33:18.015 Test: blockdev writev readv block ...passed 00:33:18.015 Test: blockdev writev readv size > 128k ...passed 00:33:18.015 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:18.015 Test: blockdev comparev and writev ...passed 00:33:18.015 Test: blockdev nvme passthru rw ...passed 00:33:18.015 Test: blockdev nvme passthru vendor specific ...passed 00:33:18.015 Test: blockdev nvme admin passthru ...passed 00:33:18.015 Test: blockdev copy ...passed 00:33:18.015 Suite: bdevio tests on: crypto_ram1 00:33:18.015 Test: blockdev write read block ...passed 00:33:18.015 Test: blockdev write zeroes read block ...passed 00:33:18.015 Test: blockdev write zeroes read no split ...passed 00:33:18.015 Test: blockdev write zeroes read split ...passed 00:33:18.015 Test: blockdev write zeroes read split partial ...passed 00:33:18.015 Test: blockdev reset ...passed 00:33:18.015 Test: blockdev write read 8 blocks ...passed 00:33:18.015 Test: blockdev write read size > 128k ...passed 00:33:18.015 Test: blockdev write read invalid size ...passed 00:33:18.015 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:18.015 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:33:18.015 Test: blockdev write read max offset ...passed 00:33:18.015 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:18.015 Test: blockdev writev readv 8 blocks ...passed 00:33:18.015 Test: blockdev writev readv 30 x 1block ...passed 00:33:18.015 Test: blockdev writev readv block ...passed 00:33:18.015 Test: blockdev writev readv size > 128k ...passed 00:33:18.015 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:18.015 Test: blockdev comparev and writev ...passed 00:33:18.015 Test: blockdev nvme passthru rw ...passed 00:33:18.015 Test: blockdev nvme passthru vendor specific ...passed 00:33:18.015 Test: blockdev nvme admin passthru ...passed 00:33:18.015 Test: blockdev copy ...passed 00:33:18.015 Suite: bdevio tests on: crypto_ram 00:33:18.015 Test: blockdev write read block ...passed 00:33:18.015 Test: blockdev write zeroes read block ...passed 00:33:18.015 Test: blockdev write zeroes read no split ...passed 00:33:18.015 Test: blockdev write zeroes read split ...passed 00:33:18.015 Test: blockdev write zeroes read split partial ...passed 00:33:18.015 Test: blockdev reset ...passed 00:33:18.015 Test: blockdev write read 8 blocks ...passed 00:33:18.015 Test: blockdev write read size > 128k ...passed 00:33:18.015 Test: blockdev write read invalid size ...passed 00:33:18.015 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:18.015 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:18.015 Test: blockdev write read max offset ...passed 00:33:18.015 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:18.015 Test: blockdev writev readv 8 blocks ...passed 00:33:18.015 Test: blockdev writev readv 30 x 1block ...passed 00:33:18.015 Test: blockdev writev readv block ...passed 00:33:18.015 Test: blockdev writev readv size > 128k ...passed 00:33:18.015 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:18.015 
Test: blockdev comparev and writev ...passed 00:33:18.015 Test: blockdev nvme passthru rw ...passed 00:33:18.015 Test: blockdev nvme passthru vendor specific ...passed 00:33:18.015 Test: blockdev nvme admin passthru ...passed 00:33:18.015 Test: blockdev copy ...passed 00:33:18.015 00:33:18.015 Run Summary: Type Total Ran Passed Failed Inactive 00:33:18.016 suites 4 4 n/a 0 0 00:33:18.016 tests 92 92 92 0 0 00:33:18.016 asserts 520 520 520 0 n/a 00:33:18.016 00:33:18.016 Elapsed time = 0.495 seconds 00:33:18.016 0 00:33:18.016 13:33:28 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 1069676 00:33:18.273 13:33:28 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 1069676 ']' 00:33:18.273 13:33:28 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 1069676 00:33:18.273 13:33:28 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:33:18.273 13:33:28 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:18.273 13:33:28 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1069676 00:33:18.273 13:33:28 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:18.273 13:33:28 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:18.273 13:33:28 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1069676' 00:33:18.273 killing process with pid 1069676 00:33:18.273 13:33:28 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@969 -- # kill 1069676 00:33:18.273 13:33:28 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@974 -- # wait 1069676 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:33:18.531 00:33:18.531 real 0m3.369s 00:33:18.531 user 0m9.369s 00:33:18.531 sys 
0m0.460s 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:33:18.531 ************************************ 00:33:18.531 END TEST bdev_bounds 00:33:18.531 ************************************ 00:33:18.531 13:33:28 blockdev_crypto_qat -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:33:18.531 13:33:28 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:33:18.531 13:33:28 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:18.531 13:33:28 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:18.531 ************************************ 00:33:18.531 START TEST bdev_nbd 00:33:18.531 ************************************ 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- 
bdev/blockdev.sh@304 -- # local bdev_num=4 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=1070282 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 1070282 /var/tmp/spdk-nbd.sock 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 1070282 ']' 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:18.531 13:33:28 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:18.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:18.531 13:33:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:18.789 [2024-07-25 13:33:29.021984] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:33:18.789 [2024-07-25 13:33:29.022039] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:18.789 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:18.789 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:18.789 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:18.789 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:18.789 [2024-07-25 13:33:29.152380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.789 [2024-07-25 13:33:29.238534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.790 [2024-07-25 13:33:29.259770] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:33:18.790 [2024-07-25 13:33:29.267792] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will 
be assigned to module dpdk_cryptodev 00:33:18.790 [2024-07-25 13:33:29.275810] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:33:19.047 [2024-07-25 13:33:29.376262] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:33:21.589 [2024-07-25 13:33:31.533785] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:33:21.589 [2024-07-25 13:33:31.533843] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:21.589 [2024-07-25 13:33:31.533856] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:21.589 [2024-07-25 13:33:31.541806] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:33:21.589 [2024-07-25 13:33:31.541823] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:21.589 [2024-07-25 13:33:31.541834] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:21.589 [2024-07-25 13:33:31.549824] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:33:21.589 [2024-07-25 13:33:31.549840] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:33:21.589 [2024-07-25 13:33:31.549850] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:21.589 [2024-07-25 13:33:31.557844] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:33:21.589 [2024-07-25 13:33:31.557860] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:33:21.589 [2024-07-25 13:33:31.557870] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
basename /dev/nbd0 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:21.589 1+0 records in 00:33:21.589 1+0 records out 00:33:21.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267077 s, 15.3 MB/s 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( 
i++ )) 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:33:21.589 13:33:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:21.847 1+0 records in 00:33:21.847 1+0 records out 00:33:21.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329645 s, 12.4 MB/s 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:33:21.847 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:22.137 1+0 records in 00:33:22.137 1+0 records out 00:33:22.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261985 s, 15.6 MB/s 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:33:22.137 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 
20 )) 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:22.425 1+0 records in 00:33:22.425 1+0 records out 00:33:22.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348355 s, 11.8 MB/s 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:33:22.425 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:22.684 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:33:22.684 { 00:33:22.684 "nbd_device": "/dev/nbd0", 00:33:22.684 "bdev_name": "crypto_ram" 00:33:22.684 }, 00:33:22.684 { 
00:33:22.684 "nbd_device": "/dev/nbd1", 00:33:22.684 "bdev_name": "crypto_ram1" 00:33:22.684 }, 00:33:22.684 { 00:33:22.684 "nbd_device": "/dev/nbd2", 00:33:22.684 "bdev_name": "crypto_ram2" 00:33:22.684 }, 00:33:22.684 { 00:33:22.684 "nbd_device": "/dev/nbd3", 00:33:22.684 "bdev_name": "crypto_ram3" 00:33:22.684 } 00:33:22.684 ]' 00:33:22.684 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:33:22.684 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:33:22.684 { 00:33:22.684 "nbd_device": "/dev/nbd0", 00:33:22.684 "bdev_name": "crypto_ram" 00:33:22.684 }, 00:33:22.684 { 00:33:22.684 "nbd_device": "/dev/nbd1", 00:33:22.684 "bdev_name": "crypto_ram1" 00:33:22.684 }, 00:33:22.684 { 00:33:22.684 "nbd_device": "/dev/nbd2", 00:33:22.684 "bdev_name": "crypto_ram2" 00:33:22.684 }, 00:33:22.684 { 00:33:22.684 "nbd_device": "/dev/nbd3", 00:33:22.684 "bdev_name": "crypto_ram3" 00:33:22.684 } 00:33:22.684 ]' 00:33:22.684 13:33:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:33:22.684 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:33:22.684 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:22.684 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:33:22.684 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:22.684 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:22.684 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:22.684 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:22.942 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:22.942 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:22.942 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:22.942 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:22.942 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:22.942 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:22.942 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:22.942 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:22.942 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:22.942 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:23.200 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:23.200 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:23.200 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:23.200 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:23.200 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:23.200 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:23.200 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:23.200 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:23.200 13:33:33 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:23.200 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:33:23.458 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:33:23.458 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:33:23.458 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:33:23.458 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:23.458 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:23.458 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:33:23.458 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:23.458 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:23.458 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:23.458 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:33:23.716 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:33:23.716 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:33:23.716 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:33:23.716 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:23.716 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:23.716 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:33:23.716 13:33:33 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:23.716 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:23.716 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:23.716 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:23.716 13:33:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@90 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:33:23.975 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:33:24.234 /dev/nbd0 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:24.234 1+0 records in 00:33:24.234 1+0 records out 00:33:24.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295914 s, 13.8 MB/s 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:33:24.234 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 /dev/nbd1 00:33:24.492 /dev/nbd1 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:24.492 1+0 records in 00:33:24.492 1+0 records out 00:33:24.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028434 s, 14.4 MB/s 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:33:24.492 13:33:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd10 00:33:24.750 /dev/nbd10 00:33:24.750 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:33:24.750 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:33:24.750 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:33:24.750 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:33:24.750 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:24.750 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:24.750 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:33:24.750 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:33:24.750 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:24.750 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:24.750 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:24.750 1+0 records in 00:33:24.750 1+0 records out 00:33:24.750 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000286104 s, 14.3 MB/s 00:33:24.750 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:24.750 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:33:24.750 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:24.750 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:24.751 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:33:24.751 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:24.751 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:33:24.751 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd11 00:33:25.008 /dev/nbd11 00:33:25.008 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:33:25.008 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:33:25.008 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:33:25.008 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:33:25.008 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:25.008 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:25.008 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:33:25.008 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:33:25.008 13:33:35 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:25.008 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:25.009 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:25.009 1+0 records in 00:33:25.009 1+0 records out 00:33:25.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026652 s, 15.4 MB/s 00:33:25.009 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:25.009 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:33:25.009 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:25.009 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:25.009 13:33:35 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:33:25.009 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:25.009 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:33:25.009 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:25.009 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:25.009 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:25.267 { 00:33:25.267 "nbd_device": "/dev/nbd0", 00:33:25.267 "bdev_name": "crypto_ram" 00:33:25.267 }, 00:33:25.267 { 00:33:25.267 "nbd_device": "/dev/nbd1", 
00:33:25.267 "bdev_name": "crypto_ram1" 00:33:25.267 }, 00:33:25.267 { 00:33:25.267 "nbd_device": "/dev/nbd10", 00:33:25.267 "bdev_name": "crypto_ram2" 00:33:25.267 }, 00:33:25.267 { 00:33:25.267 "nbd_device": "/dev/nbd11", 00:33:25.267 "bdev_name": "crypto_ram3" 00:33:25.267 } 00:33:25.267 ]' 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:25.267 { 00:33:25.267 "nbd_device": "/dev/nbd0", 00:33:25.267 "bdev_name": "crypto_ram" 00:33:25.267 }, 00:33:25.267 { 00:33:25.267 "nbd_device": "/dev/nbd1", 00:33:25.267 "bdev_name": "crypto_ram1" 00:33:25.267 }, 00:33:25.267 { 00:33:25.267 "nbd_device": "/dev/nbd10", 00:33:25.267 "bdev_name": "crypto_ram2" 00:33:25.267 }, 00:33:25.267 { 00:33:25.267 "nbd_device": "/dev/nbd11", 00:33:25.267 "bdev_name": "crypto_ram3" 00:33:25.267 } 00:33:25.267 ]' 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:33:25.267 /dev/nbd1 00:33:25.267 /dev/nbd10 00:33:25.267 /dev/nbd11' 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:33:25.267 /dev/nbd1 00:33:25.267 /dev/nbd10 00:33:25.267 /dev/nbd11' 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:33:25.267 256+0 records in 00:33:25.267 256+0 records out 00:33:25.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101539 s, 103 MB/s 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:25.267 256+0 records in 00:33:25.267 256+0 records out 00:33:25.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0763867 s, 13.7 MB/s 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:25.267 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:33:25.525 256+0 records in 00:33:25.525 256+0 records out 00:33:25.525 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0437898 s, 23.9 MB/s 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:33:25.525 256+0 records in 00:33:25.525 256+0 records out 00:33:25.525 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0510531 s, 20.5 MB/s 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:33:25.525 256+0 records in 00:33:25.525 256+0 records out 00:33:25.525 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.040585 s, 25.8 MB/s 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:25.525 13:33:35 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:25.525 13:33:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:25.800 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:25.800 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:25.800 
13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:25.800 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:25.800 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:25.800 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:25.800 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:25.800 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:25.800 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:25.800 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:26.059 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:26.059 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:26.059 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:26.059 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:26.059 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:26.059 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:26.059 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:26.059 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:26.059 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:26.059 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:33:26.317 13:33:36 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:33:26.317 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:33:26.317 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:33:26.317 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:26.317 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:26.317 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:33:26.317 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:26.317 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:26.317 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:26.317 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:33:26.576 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:33:26.576 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:33:26.576 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:33:26.576 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:26.576 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:26.576 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:33:26.576 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:26.576 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:26.576 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:26.576 13:33:36 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:26.576 13:33:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:26.834 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:26.834 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:26.834 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:26.834 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:26.834 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:26.834 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:26.834 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:26.834 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:26.834 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:26.834 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:33:26.834 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:26.835 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:33:26.835 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:33:26.835 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:26.835 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:26.835 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:33:26.835 13:33:37 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@133 -- # local mkfs_ret 00:33:26.835 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:33:27.092 malloc_lvol_verify 00:33:27.092 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:33:27.350 ea0ff404-2043-4704-8095-90dfec62c444 00:33:27.350 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:33:27.350 a344e178-92a7-42e1-b0b6-ba37303d3ec8 00:33:27.350 13:33:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:33:27.608 /dev/nbd0 00:33:27.608 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:33:27.608 mke2fs 1.46.5 (30-Dec-2021) 00:33:27.608 Discarding device blocks: 0/4096 done 00:33:27.608 Creating filesystem with 4096 1k blocks and 1024 inodes 00:33:27.608 00:33:27.608 Allocating group tables: 0/1 done 00:33:27.608 Writing inode tables: 0/1 done 00:33:27.608 Creating journal (1024 blocks): done 00:33:27.608 Writing superblocks and filesystem accounting information: 0/1 done 00:33:27.608 00:33:27.608 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:33:27.608 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:27.608 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:27.608 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:33:27.608 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:27.608 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:27.608 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:27.608 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:27.866 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:27.866 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:27.866 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:27.866 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:27.866 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:27.866 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:27.866 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:27.866 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:27.866 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:33:27.866 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:33:27.866 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 1070282 00:33:27.866 13:33:38 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 1070282 ']' 00:33:27.866 13:33:38 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 1070282 00:33:27.866 13:33:38 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:33:27.866 13:33:38 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:33:27.866 13:33:38 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1070282 00:33:28.123 13:33:38 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:28.123 13:33:38 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:28.123 13:33:38 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1070282' 00:33:28.123 killing process with pid 1070282 00:33:28.123 13:33:38 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@969 -- # kill 1070282 00:33:28.124 13:33:38 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@974 -- # wait 1070282 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:33:28.382 00:33:28.382 real 0m9.733s 00:33:28.382 user 0m12.727s 00:33:28.382 sys 0m3.813s 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:28.382 ************************************ 00:33:28.382 END TEST bdev_nbd 00:33:28.382 ************************************ 00:33:28.382 13:33:38 blockdev_crypto_qat -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:33:28.382 13:33:38 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = nvme ']' 00:33:28.382 13:33:38 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = gpt ']' 00:33:28.382 13:33:38 blockdev_crypto_qat -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:33:28.382 13:33:38 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:28.382 13:33:38 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:28.382 13:33:38 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:28.382 ************************************ 00:33:28.382 START TEST 
bdev_fio 00:33:28.382 ************************************ 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:33:28.382 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- 
common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram1]' 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram1 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- 
bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:28.382 13:33:38 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:28.641 ************************************ 00:33:28.641 START TEST bdev_fio_rw_verify 00:33:28.641 ************************************ 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:28.641 
13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:28.641 13:33:38 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:28.899 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:28.899 job_crypto_ram1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:28.899 job_crypto_ram2: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:28.899 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:28.899 fio-3.35 00:33:28.899 Starting 4 threads 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:28.899 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:28.899 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:28.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.899 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:43.782 00:33:43.782 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1072731: Thu Jul 25 13:33:51 2024 00:33:43.782 read: IOPS=21.8k, BW=85.2MiB/s (89.4MB/s)(852MiB/10001msec) 00:33:43.782 slat (usec): min=16, max=329, avg=61.31, stdev=46.84 00:33:43.782 clat (usec): min=20, max=1849, avg=344.93, stdev=274.81 00:33:43.782 lat (usec): min=56, max=2051, avg=406.23, stdev=306.89 00:33:43.782 clat percentiles (usec): 00:33:43.782 | 50.000th=[ 249], 99.000th=[ 1352], 99.900th=[ 1582], 99.990th=[ 1729], 00:33:43.782 | 99.999th=[ 1811] 00:33:43.782 write: IOPS=24.0k, BW=93.8MiB/s (98.4MB/s)(916MiB/9767msec); 0 zone resets 00:33:43.782 slat (usec): min=21, max=509, avg=75.05, stdev=49.85 00:33:43.782 clat (usec): min=18, max=2001, avg=390.98, stdev=297.45 00:33:43.782 lat (usec): min=62, max=2356, avg=466.02, stdev=331.82 00:33:43.782 clat percentiles (usec): 00:33:43.782 | 50.000th=[ 302], 99.000th=[ 1516], 99.900th=[ 1680], 99.990th=[ 1778], 00:33:43.782 | 99.999th=[ 1958] 00:33:43.782 bw ( KiB/s): min=78648, max=122720, per=98.10%, avg=94244.63, stdev=3518.70, samples=76 00:33:43.782 iops : min=19662, max=30680, avg=23561.16, stdev=879.68, 
samples=76 00:33:43.782 lat (usec) : 20=0.01%, 50=0.01%, 100=4.99%, 250=38.61%, 500=36.54% 00:33:43.782 lat (usec) : 750=9.76%, 1000=4.65% 00:33:43.782 lat (msec) : 2=5.44%, 4=0.01% 00:33:43.782 cpu : usr=99.56%, sys=0.00%, ctx=68, majf=0, minf=222 00:33:43.782 IO depths : 1=3.0%, 2=27.7%, 4=55.4%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:43.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.782 complete : 0=0.0%, 4=87.8%, 8=12.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.782 issued rwts: total=218200,234566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.782 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:43.782 00:33:43.782 Run status group 0 (all jobs): 00:33:43.782 READ: bw=85.2MiB/s (89.4MB/s), 85.2MiB/s-85.2MiB/s (89.4MB/s-89.4MB/s), io=852MiB (894MB), run=10001-10001msec 00:33:43.782 WRITE: bw=93.8MiB/s (98.4MB/s), 93.8MiB/s-93.8MiB/s (98.4MB/s-98.4MB/s), io=916MiB (961MB), run=9767-9767msec 00:33:43.782 00:33:43.782 real 0m13.407s 00:33:43.782 user 0m53.203s 00:33:43.782 sys 0m0.457s 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:33:43.782 ************************************ 00:33:43.782 END TEST bdev_fio_rw_verify 00:33:43.782 ************************************ 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local 
config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "03697f00-f95b-589b-ab67-3995a8129ec2"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "03697f00-f95b-589b-ab67-3995a8129ec2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "22ca80fc-740a-5613-b233-5c1e7357c157"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "22ca80fc-740a-5613-b233-5c1e7357c157",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": 
"crypto_ram2",' ' "aliases": [' ' "e6f29e64-1e0d-5e22-9bc2-c58c0f05d486"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "e6f29e64-1e0d-5e22-9bc2-c58c0f05d486",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "3ed6a1a7-5adc-5427-889d-51eefafaf6f4"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "3ed6a1a7-5adc-5427-889d-51eefafaf6f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:33:43.782 crypto_ram1 00:33:43.782 crypto_ram2 00:33:43.782 crypto_ram3 ]] 00:33:43.782 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "03697f00-f95b-589b-ab67-3995a8129ec2"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "03697f00-f95b-589b-ab67-3995a8129ec2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "22ca80fc-740a-5613-b233-5c1e7357c157"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": 
"22ca80fc-740a-5613-b233-5c1e7357c157",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "e6f29e64-1e0d-5e22-9bc2-c58c0f05d486"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "e6f29e64-1e0d-5e22-9bc2-c58c0f05d486",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' 
"name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "3ed6a1a7-5adc-5427-889d-51eefafaf6f4"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "3ed6a1a7-5adc-5427-889d-51eefafaf6f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram1]' 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram1 00:33:43.783 
13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram2 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:43.783 ************************************ 00:33:43.783 START TEST bdev_fio_trim 00:33:43.783 ************************************ 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 
--verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:43.783 13:33:52 
blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:43.783 13:33:52 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:43.783 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:43.783 job_crypto_ram1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:43.783 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:43.783 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:43.783 fio-3.35 00:33:43.783 Starting 4 threads 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:33:43.783 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.783 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:43.783 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.784 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:43.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.784 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:43.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.784 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:43.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.784 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:43.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.784 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:43.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.784 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:43.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.784 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:43.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.784 EAL: 
Requested device 0000:3f:02.2 cannot be used 00:33:43.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.784 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:43.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.784 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:43.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.784 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:43.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.784 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:43.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.784 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:56.035 00:33:56.035 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1075168: Thu Jul 25 13:34:05 2024 00:33:56.035 write: IOPS=41.3k, BW=161MiB/s (169MB/s)(1614MiB/10001msec); 0 zone resets 00:33:56.035 slat (usec): min=15, max=407, avg=57.59, stdev=37.49 00:33:56.035 clat (usec): min=39, max=1365, avg=203.05, stdev=124.65 00:33:56.035 lat (usec): min=55, max=1528, avg=260.64, stdev=146.22 00:33:56.035 clat percentiles (usec): 00:33:56.035 | 50.000th=[ 178], 99.000th=[ 644], 99.900th=[ 750], 99.990th=[ 824], 00:33:56.035 | 99.999th=[ 1172] 00:33:56.035 bw ( KiB/s): min=158464, max=191488, per=100.00%, avg=165482.11, stdev=1957.13, samples=76 00:33:56.035 iops : min=39616, max=47872, avg=41370.53, stdev=489.28, samples=76 00:33:56.035 trim: IOPS=41.3k, BW=161MiB/s (169MB/s)(1614MiB/10001msec); 0 zone resets 00:33:56.035 slat (usec): min=5, max=380, avg=14.68, stdev= 6.35 00:33:56.035 clat (usec): min=6, max=1529, avg=260.85, stdev=146.24 00:33:56.035 lat (usec): min=11, max=1560, avg=275.53, stdev=148.76 00:33:56.035 clat percentiles (usec): 00:33:56.035 | 50.000th=[ 225], 99.000th=[ 766], 99.900th=[ 898], 99.990th=[ 979], 00:33:56.035 | 99.999th=[ 1450] 00:33:56.035 bw ( KiB/s): min=158464, max=191488, per=100.00%, 
avg=165482.11, stdev=1957.13, samples=76 00:33:56.035 iops : min=39616, max=47872, avg=41370.53, stdev=489.28, samples=76 00:33:56.035 lat (usec) : 10=0.01%, 50=1.12%, 100=11.33%, 250=54.73%, 500=27.01% 00:33:56.035 lat (usec) : 750=5.18%, 1000=0.63% 00:33:56.035 lat (msec) : 2=0.01% 00:33:56.035 cpu : usr=99.61%, sys=0.00%, ctx=49, majf=0, minf=90 00:33:56.035 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.035 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.035 issued rwts: total=0,413228,413229,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.035 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:56.035 00:33:56.035 Run status group 0 (all jobs): 00:33:56.035 WRITE: bw=161MiB/s (169MB/s), 161MiB/s-161MiB/s (169MB/s-169MB/s), io=1614MiB (1693MB), run=10001-10001msec 00:33:56.035 TRIM: bw=161MiB/s (169MB/s), 161MiB/s-161MiB/s (169MB/s-169MB/s), io=1614MiB (1693MB), run=10001-10001msec 00:33:56.035 00:33:56.035 real 0m13.423s 00:33:56.035 user 0m53.446s 00:33:56.035 sys 0m0.467s 00:33:56.035 13:34:05 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:56.035 13:34:05 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:33:56.035 ************************************ 00:33:56.035 END TEST bdev_fio_trim 00:33:56.035 ************************************ 00:33:56.035 13:34:05 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:33:56.035 13:34:05 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:56.035 13:34:05 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:33:56.035 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:56.035 13:34:05 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT 
SIGTERM EXIT 00:33:56.035 00:33:56.035 real 0m27.183s 00:33:56.035 user 1m46.839s 00:33:56.035 sys 0m1.109s 00:33:56.036 13:34:05 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:56.036 13:34:05 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:56.036 ************************************ 00:33:56.036 END TEST bdev_fio 00:33:56.036 ************************************ 00:33:56.036 13:34:05 blockdev_crypto_qat -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:56.036 13:34:05 blockdev_crypto_qat -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:56.036 13:34:05 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:33:56.036 13:34:05 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:56.036 13:34:05 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:56.036 ************************************ 00:33:56.036 START TEST bdev_verify 00:33:56.036 ************************************ 00:33:56.036 13:34:06 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:56.036 [2024-07-25 13:34:06.090131] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:33:56.036 [2024-07-25 13:34:06.090190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1076804 ] 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3d:02.3 cannot be used 
00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.036 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:56.036 
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:56.036 EAL: Requested device 0000:3f:02.2 cannot be used
00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:56.036 EAL: Requested device 0000:3f:02.3 cannot be used
00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:56.036 EAL: Requested device 0000:3f:02.4 cannot be used
00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:56.036 EAL: Requested device 0000:3f:02.5 cannot be used
00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:56.036 EAL: Requested device 0000:3f:02.6 cannot be used
00:33:56.036 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:56.036 EAL: Requested device 0000:3f:02.7 cannot be used
00:33:56.036 [2024-07-25 13:34:06.222546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:33:56.036 [2024-07-25 13:34:06.307680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:56.036 [2024-07-25 13:34:06.307685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:56.036 [2024-07-25 13:34:06.329007] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:33:56.036 [2024-07-25 13:34:06.337037] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:33:56.036 [2024-07-25 13:34:06.345061] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:33:56.036 [2024-07-25 13:34:06.449770] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:33:58.566 [2024-07-25 13:34:08.608823] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:33:58.566 [2024-07-25 13:34:08.608895] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:33:58.566 [2024-07-25 13:34:08.608908] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:58.566 [2024-07-25 13:34:08.616842] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:33:58.566 [2024-07-25 13:34:08.616859] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:33:58.566 [2024-07-25 13:34:08.616870] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:58.566 [2024-07-25 13:34:08.624864] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:33:58.566 [2024-07-25 13:34:08.624880] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:33:58.566 [2024-07-25 13:34:08.624891] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:58.566 [2024-07-25 13:34:08.632883] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:33:58.566 [2024-07-25 13:34:08.632899] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:33:58.566 [2024-07-25 13:34:08.632910] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:58.566 Running I/O for 5 seconds...
00:34:03.830 
00:34:03.830 Latency(us)
00:34:03.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:03.830 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:03.830 Verification LBA range: start 0x0 length 0x1000
00:34:03.830 crypto_ram : 5.06 512.54 2.00 0.00 0.00 248657.84 1730.15 160222.41
00:34:03.830 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:34:03.830 Verification LBA range: start 0x1000 length 0x1000
00:34:03.830 crypto_ram : 5.06 513.80 2.01 0.00 0.00 247882.75 2372.40 159383.55
00:34:03.830 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:03.830 Verification LBA range: start 0x0 length 0x1000
00:34:03.830 crypto_ram1 : 5.06 513.98 2.01 0.00 0.00 247375.20 1979.19 145961.78
00:34:03.830 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:34:03.830 Verification LBA range: start 0x1000 length 0x1000
00:34:03.830 crypto_ram1 : 5.06 518.34 2.02 0.00 0.00 245420.23 2319.97 145961.78
00:34:03.830 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:03.830 Verification LBA range: start 0x0 length 0x1000
00:34:03.830 crypto_ram2 : 5.05 4032.41 15.75 0.00 0.00 31475.55 4325.38 27892.12
00:34:03.830 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:34:03.830 Verification LBA range: start 0x1000 length 0x1000
00:34:03.830 crypto_ram2 : 5.04 4036.62 15.77 0.00 0.00 31432.58 6658.46 27892.12
00:34:03.830 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:03.830 Verification LBA range: start 0x0 length 0x1000
00:34:03.830 crypto_ram3 : 5.05 4031.09 15.75 0.00 0.00 31394.82 4089.45 27682.41
00:34:03.830 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:34:03.830 Verification LBA range: start 0x1000 length 0x1000
00:34:03.830 crypto_ram3 : 5.05 4053.92 15.84 0.00 0.00 31218.26 2031.62 27892.12
00:34:03.830 ===================================================================================================================
00:34:03.830 Total : 18212.69 71.14 0.00 0.00 55841.94 1730.15 160222.41
00:34:03.830 
00:34:03.830 real 0m8.090s
00:34:03.830 user 0m15.398s
00:34:03.830 sys 0m0.330s
00:34:03.830 13:34:14 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:03.830 13:34:14 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:34:03.830 ************************************
00:34:03.830 END TEST bdev_verify
00:34:03.830 ************************************
00:34:03.830 13:34:14 blockdev_crypto_qat -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:34:03.830 13:34:14 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:34:03.830 13:34:14 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:34:03.830 13:34:14 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:34:03.830 ************************************
00:34:03.830 START TEST bdev_verify_big_io
00:34:03.830 ************************************
00:34:03.830 13:34:14 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:34:03.830 [2024-07-25 13:34:14.263387] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:34:03.830 [2024-07-25 13:34:14.263441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1078133 ] 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3d:02.3 cannot be used 
00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:04.089 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:04.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:04.089 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:04.089 [2024-07-25 13:34:14.395306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:04.089 [2024-07-25 13:34:14.479640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:04.089 [2024-07-25 13:34:14.479645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.089 [2024-07-25 13:34:14.501067] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:34:04.089 [2024-07-25 13:34:14.509094] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:34:04.090 [2024-07-25 13:34:14.517117] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:34:04.348 [2024-07-25 13:34:14.614567] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:34:06.876 [2024-07-25 13:34:16.768971] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:34:06.876 [2024-07-25 13:34:16.769057] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:34:06.876 
[2024-07-25 13:34:16.769070] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:06.876 [2024-07-25 13:34:16.776975] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:34:06.876 [2024-07-25 13:34:16.776993] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:34:06.876 [2024-07-25 13:34:16.777004] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:06.876 [2024-07-25 13:34:16.784998] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:34:06.876 [2024-07-25 13:34:16.785018] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:34:06.876 [2024-07-25 13:34:16.785028] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:06.876 [2024-07-25 13:34:16.793018] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:34:06.876 [2024-07-25 13:34:16.793034] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:34:06.876 [2024-07-25 13:34:16.793044] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:06.876 Running I/O for 5 seconds... 00:34:07.446 [2024-07-25 13:34:17.657309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.446 [2024-07-25 13:34:17.657715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.446 [2024-07-25 13:34:17.657788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.446 [2024-07-25 13:34:17.657846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.448 [2024-07-25 13:34:17.712425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.712479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.712815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.712831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.715710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.715763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.715800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.715838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.716195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.716236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.716275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.716313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.716734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.448 [2024-07-25 13:34:17.716749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.719707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.719761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.719805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.719857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.720323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.720364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.720402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.720440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.720819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.720835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.723591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.723634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.448 [2024-07-25 13:34:17.723675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.723712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.724117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.724165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.724204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.724245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.724650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.724666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.727365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.727408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.727446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.727483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.727917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.448 [2024-07-25 13:34:17.727958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.727996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.728034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.728435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.728450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.731270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.731313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.731353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.731391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.731826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.731867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.731906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.731944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.448 [2024-07-25 13:34:17.732294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.732309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.735117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.735165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.735203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.735241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.735650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.735690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.735727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.735764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.736192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.736209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.739014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.448 [2024-07-25 13:34:17.739056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.739098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.739144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.739549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.739589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.739627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.739670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.740063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.740080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.742956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.742998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.448 [2024-07-25 13:34:17.743036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.743073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.449 [2024-07-25 13:34:17.743498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.743539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.743577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.743615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.744005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.744020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.746563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.746606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.746644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.746686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.747118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.747164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.747203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.449 [2024-07-25 13:34:17.747242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.747597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.747613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.750350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.750392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.750432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.750471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.750889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.750941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.750979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.751035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.751389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.751405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.449 [2024-07-25 13:34:17.754088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.754134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.754184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.754222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.754579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.754620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.754659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.754696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.755061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.755077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.758094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.758153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.758203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.449 [2024-07-25 13:34:17.758241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.758630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.758672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.758711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.758749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.759102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.759125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.761870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.761912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.761959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.762005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.762382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.762438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.449 [2024-07-25 13:34:17.762476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.762513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.762937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.762954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.765529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.765573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.765611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.765648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.766098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.766144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.766183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.766221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.766617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.449 [2024-07-25 13:34:17.766633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.769161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.769203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.769240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.769277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.769702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.769741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.769779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.449 [2024-07-25 13:34:17.769818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.450 [2024-07-25 13:34:17.770167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.450 [2024-07-25 13:34:17.770186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.450 [2024-07-25 13:34:17.772291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.450 [2024-07-25 13:34:17.772334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.450 [2024-07-25 13:34:17.772371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.450 [2024-07-25 13:34:17.772415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.450 [2024-07-25 13:34:17.772692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.450 [2024-07-25 13:34:17.772732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.450 [2024-07-25 13:34:17.772776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.450 [2024-07-25 13:34:17.772815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.450 [2024-07-25 13:34:17.773160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.450 [2024-07-25 13:34:17.773176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.450 [2024-07-25 13:34:17.774913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.450 [2024-07-25 13:34:17.774956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.450 [2024-07-25 13:34:17.774994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.450 [2024-07-25 13:34:17.775031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.450 [2024-07-25 13:34:17.775452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.450 [2024-07-25 13:34:17.775493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
[identical "Failed to get src_mbufs!" errors repeated continuously from 13:34:17.775493 through 13:34:18.056986; repeats omitted] 
00:34:07.715 [2024-07-25 13:34:18.056986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.715 [2024-07-25 13:34:18.057244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.057260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.059655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.060385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.061638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.063153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.064942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.066046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.067317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.068822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.069073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.069088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.071517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.715 [2024-07-25 13:34:18.073103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.074531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.076051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.077001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.078381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.079889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.081404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.081655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.081672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.084937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.086200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.087601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.088483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.715 [2024-07-25 13:34:18.090015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.091448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.092480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.092840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.093255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.093275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.095728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.096088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.096448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.096809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.097530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.097892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.098255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.715 [2024-07-25 13:34:18.098611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.098997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.099012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.101546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.101910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.102274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.102314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.103038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.103402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.103762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.104121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.104462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.104479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.715 [2024-07-25 13:34:18.107238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.107605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.107965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.108326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.108369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.108771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.715 [2024-07-25 13:34:18.109135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.109505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.109871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.110237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.110593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.110609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.112743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.716 [2024-07-25 13:34:18.112786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.112823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.112863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.113264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.113309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.113347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.113388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.113439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.113862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.113878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.115978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.116021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.116059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.716 [2024-07-25 13:34:18.116104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.116478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.116524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.116562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.116599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.116637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.117026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.117043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.119206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.119248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.119287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.119325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.119731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.716 [2024-07-25 13:34:18.119778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.119817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.119858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.119896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.120298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.120325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.122553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.122595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.122633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.122670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.123073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.123119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.123163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.716 [2024-07-25 13:34:18.123202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.123243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.123561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.123577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.125708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.125755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.125794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.125833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.126227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.126276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.126316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.126365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.126405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.716 [2024-07-25 13:34:18.126799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.126814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.128985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.129028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.129066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.129103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.129453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.129511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.129551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.129590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.129628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.129973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.129989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.716 [2024-07-25 13:34:18.132255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.132297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.132354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.132392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.132753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.132822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.132872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.132923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.132977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.133351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.133367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.135682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.716 [2024-07-25 13:34:18.135735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.717 [2024-07-25 13:34:18.135777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.135815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.136145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.136202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.136242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.136281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.136318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.136754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.136769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.138956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.139008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.139048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.139102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.717 [2024-07-25 13:34:18.139476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.139537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.139576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.139613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.139651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.140037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.140053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.142154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.142196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.142235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.142273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.142689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.717 [2024-07-25 13:34:18.142736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.717 [2024-07-25 13:34:18.142774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:07.981 [2024-07-25 13:34:18.237661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.238863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.240357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.241857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.243121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.243440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.243456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.245235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.245596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.245952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.246312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.246561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.247827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.981 [2024-07-25 13:34:18.249330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.250827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.251536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.251784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.251801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.253628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.253990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.254352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.255395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.255696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.257228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.258740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.259868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.981 [2024-07-25 13:34:18.261361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.261646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.261661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.263654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.264021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.264470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.265802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.266051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.267575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.981 [2024-07-25 13:34:18.269172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.270069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.271351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.271600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.982 [2024-07-25 13:34:18.271616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.273776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.274135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.275602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.276904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.277157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.278679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.279451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.280938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.282608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.282856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.282871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.285114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.982 [2024-07-25 13:34:18.285895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.287157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.288669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.288916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.290370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.291568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.292828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.294332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.294580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.294596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.297052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.298601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.300280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.982 [2024-07-25 13:34:18.301867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.302117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.302824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.304094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.305605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.307111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.307371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.307387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.310916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.312182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.313691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.315199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.315573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.982 [2024-07-25 13:34:18.317099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.318770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.320369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.321869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.322238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.322254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.325695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.327202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.328694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.329644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.329893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.331165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.332661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.982 [2024-07-25 13:34:18.334161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.334703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.335128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.335148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.338668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.340291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.341785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.342917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.343206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.344730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.346236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.347320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.347680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.982 [2024-07-25 13:34:18.348096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.348112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.351426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.352929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.353635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.354901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.355155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.356696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.358330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.358694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.359050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.359443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.982 [2024-07-25 13:34:18.359459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.982 [2024-07-25 13:34:18.362751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.363940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.365383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.366675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.366924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.368460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.369203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.369560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.369919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.370352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.370369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.373384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.374145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.983 [2024-07-25 13:34:18.375412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.376910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.377161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.378620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.378978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.379338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.379694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.380104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.380121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.382523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.384186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.385692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.387297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.983 [2024-07-25 13:34:18.387547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.388106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.388467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.388824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.389187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.389519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.389534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.391796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.393069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.394576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.396088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.396379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.396752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.983 [2024-07-25 13:34:18.397113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.397486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.397851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.398098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.398113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.401061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.402671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.404184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.405608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.405982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.406352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.406708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.407064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.983 [2024-07-25 13:34:18.408361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.408667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.408682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.411399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.412897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.414399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.414919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.415366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.415732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.416087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.416735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.418003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.418255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.983 [2024-07-25 13:34:18.418270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.421277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.422797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.423931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.424300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.424706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.425072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.425432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.427034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.428509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.428757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.428772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.431772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.983 [2024-07-25 13:34:18.433383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.433758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.434115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.434508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.434871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.983 [2024-07-25 13:34:18.435909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.437175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.438678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.438925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.438940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.441928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.442731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.443089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.984 [2024-07-25 13:34:18.443448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.443880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.444246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.445675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.447265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.448769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.449018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.449033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.451999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.452367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.452724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.453081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.453485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:07.984 [2024-07-25 13:34:18.454751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.456020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.457536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.459045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.459421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.459436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.461402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.461766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.462123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.462483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.462812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.464084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:07.984 [2024-07-25 13:34:18.465594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.247 [2024-07-25 13:34:18.466312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.467987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.468330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.468346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.470056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.470423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.470779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.471136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.471423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.472694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.474208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.475710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.476546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.247 [2024-07-25 13:34:18.476794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.476813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.478675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.479034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.479393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.479752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.480001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.481538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.483164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.483531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.485087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.485339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.485355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.247 [2024-07-25 13:34:18.487675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.488037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.488404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.488764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.489199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.489561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.489918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.490281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.247 [2024-07-25 13:34:18.490643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.491053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.491069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.493549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.493929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.248 [2024-07-25 13:34:18.494289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.494646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.495046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.495421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.495793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.496159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.496520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.496944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.496960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.499384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.499746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.500107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.500473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.248 [2024-07-25 13:34:18.500857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.501226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.501582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.501937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.502300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.502677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.502692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.505251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.505619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.505991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.506351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.506794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.507165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.248 [2024-07-25 13:34:18.507528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.507893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.508255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.508636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.508650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.511216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.511575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.511617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.511972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.512327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.512703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.513062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.513422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.248 [2024-07-25 13:34:18.513778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.514189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.514206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.516623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.516982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.517348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.517400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.517730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.518100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.518460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.518816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.519193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.519542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.248 [2024-07-25 13:34:18.519558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.521791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.521832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.521871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.521909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.522262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.522318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.522357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.522395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.522433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.522792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.522809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.525064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.248 [2024-07-25 13:34:18.525106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.525162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.525200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.525552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.525616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.525654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.525705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.525755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.526187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.526204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.528493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.528548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.248 [2024-07-25 13:34:18.528587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.249 [2024-07-25 13:34:18.528625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.528956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.529012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.529051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.529089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.529126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.529537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.529553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.531727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.531768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.531817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.531856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.532318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.249 [2024-07-25 13:34:18.532386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.532424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.532462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.532499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.532913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.532930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.535058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.535102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.535144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.535182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.535547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.535595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.535633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.249 [2024-07-25 13:34:18.535672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.535709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.536120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.536137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.538417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.538466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.538503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.538541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.538958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.539003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.539041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.539080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.249 [2024-07-25 13:34:18.539118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.249 [2024-07-25 13:34:18.539491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:08.252 [2024-07-25 13:34:18.613455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.252 [2024-07-25 13:34:18.613491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.252 [2024-07-25 13:34:18.613797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.252 [2024-07-25 13:34:18.613812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.615384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.615424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.615462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.615819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.616200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.616246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.616285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.616324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.616366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.253 [2024-07-25 13:34:18.616757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.616773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.618923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.620201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.621698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.623211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.623461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.623830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.624193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.624551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.624909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.625201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.625217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.253 [2024-07-25 13:34:18.627818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.629092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.630613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.632124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.632472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.632848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.633207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.633563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.634232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.634484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.634499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.637309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.638828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.253 [2024-07-25 13:34:18.640339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.641240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.641665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.642035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.642400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.642759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.644451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.644701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.644716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.647549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.649069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.650682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.651048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.253 [2024-07-25 13:34:18.651478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.651843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.652203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.653224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.654492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.654743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.654758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.657755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.659254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.659848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.660213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.660634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.661009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.253 [2024-07-25 13:34:18.661401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.662795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.664334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.664583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.664598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.667597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.668997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.669359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.669716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.670094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.670462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.671752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.673008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.253 [2024-07-25 13:34:18.674517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.674765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.674781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.677812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.678187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.678547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.678905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.679332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.253 [2024-07-25 13:34:18.679997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.681261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.682772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.684285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.684537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.254 [2024-07-25 13:34:18.684553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.687176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.687542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.687898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.688274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.688676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.690240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.691655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.693149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.694733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.695175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.695191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.696954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.254 [2024-07-25 13:34:18.697325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.697686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.698045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.698331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.699606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.701118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.702626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.703430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.703680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.703696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.705531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.705892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.706253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.254 [2024-07-25 13:34:18.707004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.707286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.708940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.710468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.711908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.713085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.713380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.713396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.715406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.715770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.716129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.717718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.717969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.254 [2024-07-25 13:34:18.719487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.720988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.721688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.722957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.723211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.723230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.725432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.725795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.726978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.728241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.728492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.730023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.254 [2024-07-25 13:34:18.731056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.517 [2024-07-25 13:34:18.732625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.734055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.734309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.734325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.736746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.737348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.738628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.740135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.740390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.741989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.743035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.744303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.745823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.517 [2024-07-25 13:34:18.746075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.746090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.748513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.750144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.751693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.753343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.753595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.754314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.755583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.757091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.758600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.758851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.517 [2024-07-25 13:34:18.758867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.517 [2024-07-25 13:34:18.762333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:08.520 [2024-07-25 13:34:18.941400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.520 [2024-07-25 13:34:18.941416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.520 [2024-07-25 13:34:18.943546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.520 [2024-07-25 13:34:18.943588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.520 [2024-07-25 13:34:18.943630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.943669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.944064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.944114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.944166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.944211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.944276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.944729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.944744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.521 [2024-07-25 13:34:18.947114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.947159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.947198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.947236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.947579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.947638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.947677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.947720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.947766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.948098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.948114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.950673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.950726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.521 [2024-07-25 13:34:18.950765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.950806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.951056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.951111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.951168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.951218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.951256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.951612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.951627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.953671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.953714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.953753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.953791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.521 [2024-07-25 13:34:18.954202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.954247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.954287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.954337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.954375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.954728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.954745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.956919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.956960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.957004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.957041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.957358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.957406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.521 [2024-07-25 13:34:18.957444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.957481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.957518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.957806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.957821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.959361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.959409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.959447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.959493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.959740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.959794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.959836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.959873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.521 [2024-07-25 13:34:18.959910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.960159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.960175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.962143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.962183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.962222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.962262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.962673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.962719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.962758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.962799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.962836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.963079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.521 [2024-07-25 13:34:18.963094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.964645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.964686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.964723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.964760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.965053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.965104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.965146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.965184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.965221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.521 [2024-07-25 13:34:18.965465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.965480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.967558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.522 [2024-07-25 13:34:18.967615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.967658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.967697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.968143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.968190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.968228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.968267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.968305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.968616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.968631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.970148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.970190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.970228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.522 [2024-07-25 13:34:18.970269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.970518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.970571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.970611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.522 [2024-07-25 13:34:18.970662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.970700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.970945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.970960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.972779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.972821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.972859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.972897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.973276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.523 [2024-07-25 13:34:18.973332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.973370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.973409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.973446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.973856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.973872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.975334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.975374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.975414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.975458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.975845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.975893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.975930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.523 [2024-07-25 13:34:18.975968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.976005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.976322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.976337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.977964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.978006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.978048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.978086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.978498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.978553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.978603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.978643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.978681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.523 [2024-07-25 13:34:18.979106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.979122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.980711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.980752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.980790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.980827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.981120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.981180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.981223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.523 [2024-07-25 13:34:18.981265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.524 [2024-07-25 13:34:18.981304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.524 [2024-07-25 13:34:18.981549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.524 [2024-07-25 13:34:18.981564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.524 [2024-07-25 13:34:18.983067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.524 [2024-07-25 13:34:18.983108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.524 [2024-07-25 13:34:18.983149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.524 [2024-07-25 13:34:18.983187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.524 [2024-07-25 13:34:18.983599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.524 [2024-07-25 13:34:18.983644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.524 [2024-07-25 13:34:18.983683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.524 [2024-07-25 13:34:18.983722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.524 [2024-07-25 13:34:18.983760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.524 [2024-07-25 13:34:18.984105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.524 [2024-07-25 13:34:18.984120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.524 [2024-07-25 13:34:18.985930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.524 [2024-07-25 13:34:18.985970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.524 [2024-07-25 13:34:18.986008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:08.787 [... identical "Failed to get src_mbufs!" error repeated continuously between 13:34:18.986 and 13:34:19.163; duplicate entries omitted ...]
00:34:08.787 [2024-07-25 13:34:19.163271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:08.787 [2024-07-25 13:34:19.165509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.787 [2024-07-25 13:34:19.165877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.787 [2024-07-25 13:34:19.167065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.787 [2024-07-25 13:34:19.168325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.787 [2024-07-25 13:34:19.168576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.787 [2024-07-25 13:34:19.170105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.787 [2024-07-25 13:34:19.171119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.787 [2024-07-25 13:34:19.172685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.787 [2024-07-25 13:34:19.174108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.787 [2024-07-25 13:34:19.174366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.787 [2024-07-25 13:34:19.174382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.176723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.177196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.788 [2024-07-25 13:34:19.178509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.179995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.180256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.181848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.182748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.184028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.185516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.185766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.185781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.188171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.189507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.190756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.192255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.788 [2024-07-25 13:34:19.192507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.193422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.195023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.196599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.198272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.198524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.198539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.201269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.202538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.204043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.205544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.205797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.206842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.788 [2024-07-25 13:34:19.208111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.209628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.211144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.211456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.211472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.214883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.216358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.217842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.218360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.218611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.220131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.221641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.222004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.788 [2024-07-25 13:34:19.222370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.222730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.222747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.226166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.227604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.228448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.229712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.229963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.231494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.232828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.233195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.233553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.233897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.788 [2024-07-25 13:34:19.233913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.236325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.236690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.237049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.237414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.237749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.238117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.238484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.238844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.239207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.239540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.239559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.241991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.788 [2024-07-25 13:34:19.242363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.242729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.243091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.243503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.243871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.244234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.244605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.244969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.245392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.245408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.247906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.788 [2024-07-25 13:34:19.248297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.248659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.789 [2024-07-25 13:34:19.249021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.249428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.249793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.250160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.250523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.250887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.251352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.251368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.253854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.254229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.254590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.254956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.255361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.789 [2024-07-25 13:34:19.255738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.256102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.256468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.256831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.257200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.257217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.259734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.260105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.260479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.260841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.261210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.261580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.261942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.789 [2024-07-25 13:34:19.262317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.262683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.263100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.263118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.265629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.265993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.266362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.266722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.267112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.267487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.267852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.268214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.268573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:08.789 [2024-07-25 13:34:19.268979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:08.789 [2024-07-25 13:34:19.268995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.271511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.271878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.272248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.272610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.272963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.274233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.274593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.275353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.276387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.276799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.276819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.050 [2024-07-25 13:34:19.279038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.279756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.280839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.281204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.281582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.281957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.282669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.283756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.284115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.284460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.284475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.287117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.288243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.050 [2024-07-25 13:34:19.288606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.289530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.289789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.290166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.290526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.290889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.291774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.292030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.050 [2024-07-25 13:34:19.292046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.051 [2024-07-25 13:34:19.294390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.051 [2024-07-25 13:34:19.294759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.051 [2024-07-25 13:34:19.294804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.051 [2024-07-25 13:34:19.295912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.051 [2024-07-25 13:34:19.296208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.054 [... identical "Failed to get src_mbufs!" errors from accel_dpdk_cryptodev.c:468 repeated through 2024-07-25 13:34:19.388935; duplicates omitted ...] 
00:34:09.054 [2024-07-25 13:34:19.389275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.389319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.389357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.389603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.393554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.393601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.393639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.393677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.394104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.394150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.394189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.394228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.394639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.054 [2024-07-25 13:34:19.398068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.398115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.398158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.398196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.398510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.398550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.398587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.398624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.398869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.402716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.402765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.402804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.402844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.054 [2024-07-25 13:34:19.403118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.403170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.403207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.403250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.403495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.407916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.407962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.408000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.408037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.408417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.408457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.408511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.408561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.054 [2024-07-25 13:34:19.409004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.444374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.444440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.445944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.453238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.453582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.453630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.454652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.454703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.455945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.456198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.456214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.054 [2024-07-25 13:34:19.459219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.054 [2024-07-25 13:34:19.460708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.461207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.461563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.462290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.462833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.464095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.465603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.465851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.465867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.469227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.470769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.471127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.471498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.055 [2024-07-25 13:34:19.472257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.473441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.474691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.476195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.476444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.476459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.479481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.479932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.480295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.480653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.481687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.482954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.484460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.055 [2024-07-25 13:34:19.485959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.486213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.486229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.488702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.489066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.489426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.489785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.491803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.493272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.494819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.496448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.496824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.496839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.055 [2024-07-25 13:34:19.498626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.498993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.499360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.499717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.501217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.502720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.504226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.504934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.505186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.505201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.507124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.507493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.507853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.055 [2024-07-25 13:34:19.509057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.510861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.512366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.513433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.514965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.515282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.515297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.517431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.517794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.518519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.519784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.521699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.523229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.055 [2024-07-25 13:34:19.524313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.055 [2024-07-25 13:34:19.525566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.056 [2024-07-25 13:34:19.525812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.056 [2024-07-25 13:34:19.525827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.056 [2024-07-25 13:34:19.528112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.056 [2024-07-25 13:34:19.528478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.056 [2024-07-25 13:34:19.530019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.056 [2024-07-25 13:34:19.531678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.056 [2024-07-25 13:34:19.533427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.056 [2024-07-25 13:34:19.534131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.056 [2024-07-25 13:34:19.535400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.536787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.537035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.318 [2024-07-25 13:34:19.537051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.539422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.540614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.541875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.543385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.544730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.546285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.547677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.549186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.549435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.549450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.552430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.553684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.318 [2024-07-25 13:34:19.555189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.556697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.558008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.559281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.560786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.562286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.562640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.562656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.566672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.568322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.569872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.571306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.572858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.318 [2024-07-25 13:34:19.574369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.575873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.576927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.577360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.577376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.580761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.582269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.583831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.584682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.586605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.588120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.589559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.318 [2024-07-25 13:34:19.589916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.318 [2024-07-25 13:34:19.590325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:09.322 [2024-07-25 13:34:19.741489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.741527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.741565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.741841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.741880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.741917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.741956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.742224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.742239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.743940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.743981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.744021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.744062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.322 [2024-07-25 13:34:19.744458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.744498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.744536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.744573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.744966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.744982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.746485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.746526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.746563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.746600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.746930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.746970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.747011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.322 [2024-07-25 13:34:19.747050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.747299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.747315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.748821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.748862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.748899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.748936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.749367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.749409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.749447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.749486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.749859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.749875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.322 [2024-07-25 13:34:19.751582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.751623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.751661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.751698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.751982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.752022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.752059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.752096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.752449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.752468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.753885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.753933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.753971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.322 [2024-07-25 13:34:19.754011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.754428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.754468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.754507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.754545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.754952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.754970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.756878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.756918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.756955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.756992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.757279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.757320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.322 [2024-07-25 13:34:19.757364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.757406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.757651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.322 [2024-07-25 13:34:19.757666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.759210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.759250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.759287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.759324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.759695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.759736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.759774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.759825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.760266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.323 [2024-07-25 13:34:19.760287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.762303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.762350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.762399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.762439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.762718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.762766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.762803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.762840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.763082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.763097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.764644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.764685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.323 [2024-07-25 13:34:19.764722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.764761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.765045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.765084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.765121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.765163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.765517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.765532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.767871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.767912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.767948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.767985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.768307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.323 [2024-07-25 13:34:19.768347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.768384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.768421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.768663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.768678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.770146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.770187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.770224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.770260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.770538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.770577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.770615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.770658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.323 [2024-07-25 13:34:19.770902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.770917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.773190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.773232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.773271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.773309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.773622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.773661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.773698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.773735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.774042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.774057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.775557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.323 [2024-07-25 13:34:19.775605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.775644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.323 [2024-07-25 13:34:19.775962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.324 [2024-07-25 13:34:19.776015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.324 [2024-07-25 13:34:19.776055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.324 [2024-07-25 13:34:19.776303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.820256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.820323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.821806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.825090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.825156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.825490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.825845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.585 [2024-07-25 13:34:19.826205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.826613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.826661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.826707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.828244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.828292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.828340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.829983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.830236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.830252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.833401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.834932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.835296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.585 [2024-07-25 13:34:19.835653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.836036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.836404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.837590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.838845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.840341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.840592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.840606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.843626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.844340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.844698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.845054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.585 [2024-07-25 13:34:19.845498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.585 [2024-07-25 13:34:19.845945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:09.589 (previous *ERROR* line repeated continuously, with only the timestamp varying, through [2024-07-25 13:34:20.051349])
00:34:09.589 [2024-07-25 13:34:20.051391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.051770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.051815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.052178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.052533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.052574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.052977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.052993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.055300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.055349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.055704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.055744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.056098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.589 [2024-07-25 13:34:20.056471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.056516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.056885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.056924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.057335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.057352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.061217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.061265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.061620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.062287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.062545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.062914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.062957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.589 [2024-07-25 13:34:20.063780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.063824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.064117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.064132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.066832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.066879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.068385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.069874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.070215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.589 [2024-07-25 13:34:20.070588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.850 [2024-07-25 13:34:20.070944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.850 [2024-07-25 13:34:20.070999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.850 [2024-07-25 13:34:20.071368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.850 [2024-07-25 13:34:20.071790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.850 [2024-07-25 13:34:20.071805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.850 [2024-07-25 13:34:20.073885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.850 [2024-07-25 13:34:20.073931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.850 [2024-07-25 13:34:20.075188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.075229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.075478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.077008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.077051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.078253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.079563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.079933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.079949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.851 [2024-07-25 13:34:20.083022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.083068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.083106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.083147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.083428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.083480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.084990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.086501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.086544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.086908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.086923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.088335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.088387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.851 [2024-07-25 13:34:20.088428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.088466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.088858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.088903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.088941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.088980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.089018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.089421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.089437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.091299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.091340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.091377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.091414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.851 [2024-07-25 13:34:20.091661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.091711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.091749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.091793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.091833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.092080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.092095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.093636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.093677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.093713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.093750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.094032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.094084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.851 [2024-07-25 13:34:20.094122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.094176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.094217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.094463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.094483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.096628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.096669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.096708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.096746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.097116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.097167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.097206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.097243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.851 [2024-07-25 13:34:20.097280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.097568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.097583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.099071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.099112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.099157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.099195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.099486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.099537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.099575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.099612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.099658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.099907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.851 [2024-07-25 13:34:20.099922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.101894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.101935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.101973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.102011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.102405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.102455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.102494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.102532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.102574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.102831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.102846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.104340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.851 [2024-07-25 13:34:20.104381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.104423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.104460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.104704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.851 [2024-07-25 13:34:20.104756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.104795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.104837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.104874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.105117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.105132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.106670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.106711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.106748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.852 [2024-07-25 13:34:20.106786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.107182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.107228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.107268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.107306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.107345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.107595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.107610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.109509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.109549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.109590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.109627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.109875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.852 [2024-07-25 13:34:20.109929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.109967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.110026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.110066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.110312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.110328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.111844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.111885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.111922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.111959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.112257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.112311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.852 [2024-07-25 13:34:20.112350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.852 [2024-07-25 13:34:20.112401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[previous message repeated for timestamps 13:34:20.112440 through 13:34:20.231031; duplicate lines elided]
00:34:09.855 [2024-07-25 13:34:20.232559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.233575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.235175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.236623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.236872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.236887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.240729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.241316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.242565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.244063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.244315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.246000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.246951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.855 [2024-07-25 13:34:20.248197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.249705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.249954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.249969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.252307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.253733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.255003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.256486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.256735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.257580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.259116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.260764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.262371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.855 [2024-07-25 13:34:20.262619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.262634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.266977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.268243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.269755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.271275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.271571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.272973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.274255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.275773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.277282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.277640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.277656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.855 [2024-07-25 13:34:20.281298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.282807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.284316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.285641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.285939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.855 [2024-07-25 13:34:20.287197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.288705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.290211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.291127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.291379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.291394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.295422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.296930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.856 [2024-07-25 13:34:20.298438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.299154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.299405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.300995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.302674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.304227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.304593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.305001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.305018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.308361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.309850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.310809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.312484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.856 [2024-07-25 13:34:20.312746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.314258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.315766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.316370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.317727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.318160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.318177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.322634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.323570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.324826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.326232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.326491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.326863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:09.856 [2024-07-25 13:34:20.327225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.327580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.327935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.328193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.328209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.331248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.332713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:09.856 [2024-07-25 13:34:20.334284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.335955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.336318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.337707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.338066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.338854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.117 [2024-07-25 13:34:20.339864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.340292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.340312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.345553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.347073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.348575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.349229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.349672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.350036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.350400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.350935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.352194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.352445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.117 [2024-07-25 13:34:20.352463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.355509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.357192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.358723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.359330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.359580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.359950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.360412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.117 [2024-07-25 13:34:20.361739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.362099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.362463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.362482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.365713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.118 [2024-07-25 13:34:20.367028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.367515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.367873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.368121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.368751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.369109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.369472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.369836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.370291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.370307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.372854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.374427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.374785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.118 [2024-07-25 13:34:20.375349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.375597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.375967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.376333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.376698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.377061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.377472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.377488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.380403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.381188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.382202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.382559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.382935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.118 [2024-07-25 13:34:20.383312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.383670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.384026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.384385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.384804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.384820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.386955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.388154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.388755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.389111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.389442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.389810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.390176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.118 [2024-07-25 13:34:20.390532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.390891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.391316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.391332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.394167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.394538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.394899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.395267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.395700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.396065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.396425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.396787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.118 [2024-07-25 13:34:20.397158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.118 [2024-07-25 13:34:20.397541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.545155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.121 [2024-07-25 13:34:20.545192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.545229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.545549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.545564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.547869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.547916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.547959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.547996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.548245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.548298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.548335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.548373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.548418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.121 [2024-07-25 13:34:20.548662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.548677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.550273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.550317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.550358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.121 [2024-07-25 13:34:20.550395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.550650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.550700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.550739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.550777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.550815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.551060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.551075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.122 [2024-07-25 13:34:20.553872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.553926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.553964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.554009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.554258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.554306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.554355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.554396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.554433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.554674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.554692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.556233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.556273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.122 [2024-07-25 13:34:20.556310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.556347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.556592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.556642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.556680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.556717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.556754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.557176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.557192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.562544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.562590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.562627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.562664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.122 [2024-07-25 13:34:20.562950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.563003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.563040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.563078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.563114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.563360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.563375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.564859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.564900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.564940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.564976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.565224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.565275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.122 [2024-07-25 13:34:20.565312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.565349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.565418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.565661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.565676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.569126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.569175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.569214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.569252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.569531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.569583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.569621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.569658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.122 [2024-07-25 13:34:20.569695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.569982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.569998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.571487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.571535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.571585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.571626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.571872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.571931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.571972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.572009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.572046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.122 [2024-07-25 13:34:20.572294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.122 [2024-07-25 13:34:20.572309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.575539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.575586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.575625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.575663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.576057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.576106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.576150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.576191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.576229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.576472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.576487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.577977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.123 [2024-07-25 13:34:20.578017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.578058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.578095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.578406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.578457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.578495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.578533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.578570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.578815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.578831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.582535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.582580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.582626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.123 [2024-07-25 13:34:20.582664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.583091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.583143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.583183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.583222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.583261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.583593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.583609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.584992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.585033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.585071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.585113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.585365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.123 [2024-07-25 13:34:20.585413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.585450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.585491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.585535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.585785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.585800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.590651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.590696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.590734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.591740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.592153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.592203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.592242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.123 [2024-07-25 13:34:20.592280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.592318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.592594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.592609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.594100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.594145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.594188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.594225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.594474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.594524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.594563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.596141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.596183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.123 [2024-07-25 13:34:20.596434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.596449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.599968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.600346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.600388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.601781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.602066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.123 [2024-07-25 13:34:20.602119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.420 [2024-07-25 13:34:20.603608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.420 [2024-07-25 13:34:20.603651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.420 [2024-07-25 13:34:20.603688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.420 [2024-07-25 13:34:20.603936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.420 [2024-07-25 13:34:20.603951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.420 [2024-07-25 13:34:20.605455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.420 [... identical "Failed to get src_mbufs!" error repeated continuously from 13:34:20.605455 through 13:34:20.874739 (log timestamps 00:34:10.420-00:34:10.423) ...] 
00:34:10.423 [2024-07-25 13:34:20.874739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.423 [2024-07-25 13:34:20.875040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.875055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.878511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.879221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.880303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.880663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.881048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.881427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.881935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.883230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.883588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.883942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.883957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.423 [2024-07-25 13:34:20.887925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.888328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.889717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.890075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.890453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.890823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.891201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.892620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.892981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.893370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.893387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.423 [2024-07-25 13:34:20.897459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.424 [2024-07-25 13:34:20.897828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.424 [2024-07-25 13:34:20.899369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.424 [2024-07-25 13:34:20.899728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.424 [2024-07-25 13:34:20.900120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.424 [2024-07-25 13:34:20.900495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.424 [2024-07-25 13:34:20.900857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.424 [2024-07-25 13:34:20.902430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.424 [2024-07-25 13:34:20.902787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.424 [2024-07-25 13:34:20.903201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.424 [2024-07-25 13:34:20.903217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.907588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.907954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.909605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.909972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.685 [2024-07-25 13:34:20.910384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.910766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.911129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.912783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.913152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.913572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.913588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.918198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.918564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.920120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.920177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.920631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.920998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.685 [2024-07-25 13:34:20.921370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.921738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.923308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.923768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.923785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.927894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.928273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.928635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.930129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.930558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.930928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.931294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.931341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.685 [2024-07-25 13:34:20.931824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.932076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.932092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.935239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.935293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.936809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.936859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.937199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.938666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.938709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.939064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.939794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.940050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.685 [2024-07-25 13:34:20.940065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.943527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.943580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.944653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.944694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.945100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.945152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.945511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.945872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.945917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.946181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.946196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.949976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.685 [2024-07-25 13:34:20.950030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.951446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.951487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.951901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.952278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.952320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.953932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.953982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.954452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.954471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.960013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.960066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.960429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.685 [2024-07-25 13:34:20.961269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.961600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.963180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.963226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.964727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.964769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.965028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.965053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.969902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.969959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.970326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.971756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.972175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.685 [2024-07-25 13:34:20.972571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.973966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.974011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.685 [2024-07-25 13:34:20.975507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.975758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.975774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.981403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.981455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.981869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.981910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.982317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.983732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.983775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.686 [2024-07-25 13:34:20.984135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.984582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.984834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.984849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.990092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.990149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.990187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.990225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.990516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.990569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.991841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.992372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.992413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.686 [2024-07-25 13:34:20.992813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.992829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.997824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.997870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.997907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.997967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.998221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.998266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.998312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.998351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.998389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.998704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:20.998719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.686 [2024-07-25 13:34:21.002556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:21.002603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:21.002643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:21.002681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:21.003082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:21.003147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:21.003191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:21.003228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:21.003265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:21.003515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:21.003530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:21.006942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.686 [2024-07-25 13:34:21.006989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.686 [2024-07-25 13:34:21.007028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:10.686 [... identical "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 repeated for timestamps 13:34:21.007066 through 13:34:21.129699 ...]
00:34:10.689 [2024-07-25 13:34:21.133923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.135418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.135462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.136958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.137322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.137378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.137747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.137787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.138151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.138584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.138600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.142314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.143584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.689 [2024-07-25 13:34:21.143628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.143666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.143919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.143971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.145477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.145519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.146127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.146574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.146592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.150222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.151889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.151933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.151971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.689 [2024-07-25 13:34:21.152299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.689 [2024-07-25 13:34:21.152349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.152387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.153651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.153693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.153941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.153957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.158087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.159412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.159457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.160960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.161218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.161272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.690 [2024-07-25 13:34:21.162635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.162677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.162716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.162963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.162977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.167034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.167403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.167968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.169216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.690 [2024-07-25 13:34:21.169465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.171028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.171080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.171121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.950 [2024-07-25 13:34:21.172429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.172740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.172755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.177263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.177627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.179265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.180750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.181002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.182531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.183223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.184490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.185992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.186248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.950 [2024-07-25 13:34:21.186263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.191471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.192950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.194456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.195353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.195604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.196875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.198389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.199887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.200348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.200777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.200794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.205825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.950 [2024-07-25 13:34:21.206790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.208049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.209548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.209799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.211068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.211432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.211788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.212147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.212553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.212572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.217662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.218156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.218516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.950 [2024-07-25 13:34:21.218872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.219300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.219933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.221169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.222675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.224185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.224435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.224452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.229305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.229679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.231266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.231632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.232027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.950 [2024-07-25 13:34:21.233572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.234958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.236455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.238021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.238432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.238448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.243278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.243645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.244753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.246004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.246260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.247790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.248891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.950 [2024-07-25 13:34:21.250401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.251763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.252013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.252029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.257405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.258964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.260476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.261812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.262178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.263486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.950 [2024-07-25 13:34:21.265029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.266542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.267782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.951 [2024-07-25 13:34:21.268118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.268133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.271320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.271689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.272050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.272419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.272772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.273165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.273521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.273878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.274245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.274611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.274627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.951 [2024-07-25 13:34:21.277885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.278261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.278619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.278977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.279333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.279702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.280061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.280424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.280783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.281205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.281221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.284493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.284864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.951 [2024-07-25 13:34:21.285243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.285600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.286063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.286434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.286801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.287173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.287533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.287950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.287966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.291245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.291613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.291976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:10.951 [2024-07-25 13:34:21.292340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:10.951 [2024-07-25 13:34:21.292731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:11.216 [identical "Failed to get src_mbufs!" errors repeated through 2024-07-25 13:34:21.457396; duplicate log lines omitted]
00:34:11.216 [2024-07-25 13:34:21.457433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.457470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.457715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.457766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.457804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.457841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.457889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.458133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.458154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.460326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.460367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.460409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.460447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.216 [2024-07-25 13:34:21.460701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.460748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.460786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.460824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.460869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.461115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.461130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.462642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.462683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.462724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.216 [2024-07-25 13:34:21.462761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.463009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.463068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.217 [2024-07-25 13:34:21.463107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.463149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.463186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.463430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.463445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.465545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.465670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.465708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.465745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.466127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.466177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.466218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.466256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.217 [2024-07-25 13:34:21.466295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.466541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.466556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.468086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.468126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.468167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.468204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.468515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.468567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.468605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.468642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.468678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.468920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.217 [2024-07-25 13:34:21.468935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.470745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.470787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.470836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.470883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.471325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.471373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.471412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.471452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.471490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.471873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.471889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.473317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.217 [2024-07-25 13:34:21.473365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.473402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.473440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.473726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.473777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.473815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.473853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.473890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.474181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.474197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.475895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.475936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.475973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.217 [2024-07-25 13:34:21.476014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.476376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.476423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.476461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.476499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.476536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.476926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.476944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.478434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.478475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.478513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.478550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.478890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.217 [2024-07-25 13:34:21.478942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.478984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.479022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.479059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.479312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.479328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.480866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.480906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.480946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.480985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.481399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.481445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.481483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.217 [2024-07-25 13:34:21.481522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.481560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.481969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.481988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.483632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.483672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.483709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.483752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.483996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.484045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.217 [2024-07-25 13:34:21.484085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.484131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.484178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.218 [2024-07-25 13:34:21.484466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.484482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.485937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.485978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.486015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.486066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.486504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.486550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.486590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.486629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.486667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.487047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.487063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.218 [2024-07-25 13:34:21.488987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.489037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.489074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.489112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.489359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.489411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.489449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.489486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.489523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.489815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.489831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.491255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.491295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.218 [2024-07-25 13:34:21.491335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.491372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.491778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.491830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.491872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.491909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.491947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.492351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.492367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.494269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.494309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.494346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.494383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.218 [2024-07-25 13:34:21.494624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.494677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.494715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.494752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.494789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.495031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.495046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.496578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.496627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.496667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.496704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.496946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.218 [2024-07-25 13:34:21.496996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.218 [2024-07-25 13:34:21.497035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:11.221 [2024-07-25 13:34:21.641447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.221 [2024-07-25 13:34:21.641809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.221 [2024-07-25 13:34:21.642174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.221 [2024-07-25 13:34:21.642535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.221 [2024-07-25 13:34:21.642893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.221 [2024-07-25 13:34:21.643314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.221 [2024-07-25 13:34:21.643331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.221 [2024-07-25 13:34:21.645667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.221 [2024-07-25 13:34:21.646030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.221 [2024-07-25 13:34:21.646390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.221 [2024-07-25 13:34:21.646746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.221 [2024-07-25 13:34:21.647088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.221 [2024-07-25 13:34:21.647465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.221 [2024-07-25 13:34:21.647824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.221 [2024-07-25 13:34:21.648183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.221 [2024-07-25 13:34:21.648555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.221 [2024-07-25 13:34:21.648964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.648980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.651652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.652014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.652382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.652746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.653216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.653583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.653939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.654302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.222 [2024-07-25 13:34:21.654669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.655024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.655040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.657540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.657913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.658274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.658631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.659040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.659526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.660815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.661603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.662599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.663021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.222 [2024-07-25 13:34:21.663037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.665564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.665930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.666291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.666647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.667060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.667438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.667801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.668164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.668520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.668954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.668972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.671473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.222 [2024-07-25 13:34:21.671836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.672201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.672564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.672914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.673283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.673640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.673995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.674363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.674732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.674748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.678081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.679342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.680831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.222 [2024-07-25 13:34:21.682334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.682665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.684317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.685822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.687411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.689076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.689446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.689469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.692874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.694373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.695872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.696939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.222 [2024-07-25 13:34:21.697197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.222 [2024-07-25 13:34:21.698453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.699962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.701472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.702154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.702588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.702604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.705835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.707332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.708844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.709621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.709904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.711539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.713040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.484 [2024-07-25 13:34:21.714485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.714843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.715253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.715270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.718561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.720072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.720993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.484 [2024-07-25 13:34:21.722621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.722871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.724391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.725891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.726382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.726744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.485 [2024-07-25 13:34:21.727142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.727157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.730513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.732166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.733108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.734376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.734628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.736158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.737400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.737757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.738112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.738541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.738557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.485 [2024-07-25 13:34:21.741693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.742480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.743988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.745674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.745923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.747438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.747835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.748197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.748556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.748990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.749006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.752170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.753126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.485 [2024-07-25 13:34:21.754387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.754430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.754679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.756209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.757264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.757621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.757976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.758410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.758426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.761473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.762174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.763485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.764979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.485 [2024-07-25 13:34:21.765232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.766824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.767194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.767238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.767593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.767929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.767946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.771153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.771201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.772298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.772339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.772590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.773872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.485 [2024-07-25 13:34:21.773915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.775420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.776303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.776555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.776571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.779749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.779796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.781040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.781082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.781335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.781388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.782898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.485 [2024-07-25 13:34:21.783606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.485 [2024-07-25 13:34:21.783647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.488 [... same "Failed to get src_mbufs!" error repeated verbatim through 2024-07-25 13:34:21.870761; repeats omitted ...] 
00:34:11.488 [2024-07-25 13:34:21.870802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.488 [2024-07-25 13:34:21.870841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.488 [2024-07-25 13:34:21.870880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.488 [2024-07-25 13:34:21.871306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.488 [2024-07-25 13:34:21.871322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.488 [2024-07-25 13:34:21.873316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.488 [2024-07-25 13:34:21.873359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.488 [2024-07-25 13:34:21.873409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.488 [2024-07-25 13:34:21.873446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.488 [2024-07-25 13:34:21.873695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.488 [2024-07-25 13:34:21.873753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.488 [2024-07-25 13:34:21.873792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.488 [2024-07-25 13:34:21.873829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.489 [2024-07-25 13:34:21.873867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.874143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.874159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.875570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.875610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.875647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.875684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.876040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.876113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.876173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.876211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.876250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.876673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.489 [2024-07-25 13:34:21.876689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.878672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.878713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.878749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.878786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.879031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.879083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.879120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.879162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.879200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.879449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.879463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.880960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.489 [2024-07-25 13:34:21.881000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.881037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.882695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.883073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.883125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.883166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.883204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.883242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.883653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.883670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.885577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.885617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.885654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.489 [2024-07-25 13:34:21.885692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.885940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.885991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.886029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.887538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.887580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.888017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.888032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.889436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.889801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.889845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.890204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.890559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.489 [2024-07-25 13:34:21.890614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.890969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.891011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.891049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.891308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.891324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.892864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.894145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.894188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.895619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.895984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.896359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.896403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.489 [2024-07-25 13:34:21.896447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.896801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.897240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.897256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.898767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.900147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.900189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.901585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.489 [2024-07-25 13:34:21.901870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.901923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.903442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.903485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.904827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.490 [2024-07-25 13:34:21.905174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.905190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.907477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.909149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.909188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.909233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.909484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.909528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.911206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.911251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.912647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.912947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.912962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.490 [2024-07-25 13:34:21.914495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.914856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.914897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.914936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.915338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.915390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.915429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.915783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.915823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.916194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.916210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.917623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.918569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.490 [2024-07-25 13:34:21.918612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.919868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.920123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.920181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.921669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.921710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.921747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.922089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.922105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.924660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.926130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.927746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.929279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.490 [2024-07-25 13:34:21.929530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.930250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.930294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.930332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.931581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.931834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.931849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.934028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.934392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.935909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.937264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.937517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.939045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.490 [2024-07-25 13:34:21.939698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.941292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.942725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.942977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.942992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.945479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.945848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.946219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.946584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.947019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.947389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.947745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.948104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.490 [2024-07-25 13:34:21.948470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.948824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.948840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.951316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.951681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.952039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.952399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.952810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.953180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.953542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.953903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.954264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.490 [2024-07-25 13:34:21.954627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.490 [2024-07-25 13:34:21.954643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:34:11.490 (previous error message repeated continuously with varying timestamps, 13:34:21.957133 through 13:34:22.144320; last logged at 00:34:11.757)
00:34:11.757 [2024-07-25 13:34:22.144361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.144611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.144626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.146948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.146994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.147989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.149239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.149490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.151011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.151056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.152100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.152145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.152394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.757 [2024-07-25 13:34:22.152414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.154294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.154341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.154699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.155059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.155441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.156853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.158410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.158454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.159951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.160203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.160219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.162960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.757 [2024-07-25 13:34:22.163006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.163369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.163414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.163824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.164198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.164245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.164600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.166224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.166474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.166489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.169722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.169775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.169813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.757 [2024-07-25 13:34:22.169850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.170099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.170155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.171386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.171747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.171796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.172202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.172218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.173987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.174028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.174065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.174102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.174350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.757 [2024-07-25 13:34:22.174402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.174440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.174477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.174524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.174770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.174785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.176343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.176391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.176429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.176466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.176812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.176864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.176903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.757 [2024-07-25 13:34:22.176959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.177001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.177444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.177460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.179436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.179483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.179521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.179562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.179810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.179856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.179921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.179960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.179998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.757 [2024-07-25 13:34:22.180245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.180261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.757 [2024-07-25 13:34:22.181853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.181893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.181930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.181967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.182217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.182268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.182306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.182344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.182381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.182850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.182865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.758 [2024-07-25 13:34:22.185247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.185288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.185328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.185365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.185659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.185711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.185749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.185786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.185823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.186071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.186086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.187611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.187652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.758 [2024-07-25 13:34:22.187689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.187740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.187987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.188037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.188077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.188124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.188166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.188411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.188425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.190594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.190635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.190673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.190714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.758 [2024-07-25 13:34:22.191005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.191053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.191090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.191128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.191169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.191454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.191469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.192993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.193034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.193077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.193114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.193364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.193415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.758 [2024-07-25 13:34:22.193455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.193493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.193530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.193773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.193789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.195754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.195795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.195837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.195875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.196283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.196328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.196367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.196405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.758 [2024-07-25 13:34:22.196445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.196690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.196705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.198243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.198284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.198321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.198358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.198653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.198708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.198746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.198783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.198820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.199064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.758 [2024-07-25 13:34:22.199079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.758 [2024-07-25 13:34:22.200958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.759 [2024-07-25 13:34:22.200999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.759 [2024-07-25 13:34:22.201038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.759 [2024-07-25 13:34:22.201094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.759 [2024-07-25 13:34:22.201510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.759 [2024-07-25 13:34:22.201565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.759 [2024-07-25 13:34:22.201606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.759 [2024-07-25 13:34:22.201644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.759 [2024-07-25 13:34:22.201683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.759 [2024-07-25 13:34:22.202075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.759 [2024-07-25 13:34:22.202090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:11.759 [2024-07-25 13:34:22.203545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:11.759 [2024-07-25 13:34:22.203593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:12.021 [2024-07-25 13:34:22.302268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:12.021 [2024-07-25 13:34:22.302285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:12.021 [2024-07-25 13:34:22.304953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:12.021 [2024-07-25 13:34:22.305333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:12.021 [2024-07-25 13:34:22.305702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:12.021 [2024-07-25 13:34:22.306060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:12.021 [2024-07-25 13:34:22.306505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:12.021 [2024-07-25 13:34:22.306876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:12.021 [2024-07-25 13:34:22.307251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:12.021 [2024-07-25 13:34:22.307615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:12.021 [2024-07-25 13:34:22.307982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:12.021 [2024-07-25 13:34:22.308389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:34:12.021 [2024-07-25 13:34:22.308406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:34:12.587 
00:34:12.587 Latency(us)
00:34:12.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:12.587 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:12.587 Verification LBA range: start 0x0 length 0x100
00:34:12.587 crypto_ram : 5.89 43.47 2.72 0.00 0.00 2850262.22 288568.12 2348810.24
00:34:12.587 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:12.587 Verification LBA range: start 0x100 length 0x100
00:34:12.587 crypto_ram : 5.93 43.18 2.70 0.00 0.00 2880084.38 273468.62 2483027.97
00:34:12.587 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:12.587 Verification LBA range: start 0x0 length 0x100
00:34:12.587 crypto_ram1 : 5.89 43.46 2.72 0.00 0.00 2752181.04 288568.12 2160905.42
00:34:12.587 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:12.587 Verification LBA range: start 0x100 length 0x100
00:34:12.587 crypto_ram1 : 5.93 43.18 2.70 0.00 0.00 2780515.53 271790.90 2295123.15
00:34:12.587 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:12.587 Verification LBA range: start 0x0 length 0x100
00:34:12.588 crypto_ram2 : 5.61 296.67 18.54 0.00 0.00 387826.58 77594.62 614046.11
00:34:12.588 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:12.588 Verification LBA range: start 0x100 length 0x100
00:34:12.588 crypto_ram2 : 5.59 280.78 17.55 0.00 0.00 406910.21 19293.80 630823.32
00:34:12.588 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:12.588 Verification LBA range: start 0x0 length 0x100
00:34:12.588 crypto_ram3 : 5.72 309.09 19.32 0.00 0.00 360797.79 51170.51 466406.60
00:34:12.588 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:12.588 Verification LBA range: start 0x100 length 0x100
00:34:12.588 crypto_ram3 : 5.69 292.49 18.28 0.00 0.00 381191.03 46137.34 340577.48
00:34:12.588 ===================================================================================================================
00:34:12.588 Total : 1352.33 84.52 0.00 0.00 707502.06 19293.80 2483027.97
00:34:12.846 
00:34:12.846 real 0m8.956s
00:34:12.846 user 0m17.060s
00:34:12.846 sys 0m0.402s
00:34:12.846 13:34:23 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:12.846 13:34:23 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:34:12.846 ************************************
00:34:12.846 END TEST bdev_verify_big_io
00:34:12.846 ************************************
00:34:12.846 13:34:23 blockdev_crypto_qat -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:34:12.846 13:34:23 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:34:12.846 13:34:23 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:34:12.846 13:34:23 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:34:12.846 ************************************
00:34:12.846 START TEST bdev_write_zeroes
00:34:12.846 ************************************
00:34:12.846 13:34:23 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:34:12.846 [2024-07-25 13:34:23.285468] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:34:12.846 [2024-07-25 13:34:23.285519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1079712 ] 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3d:02.3 cannot be used 
00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:13.103 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:13.103 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.103 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:13.104 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.104 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:13.104 [2024-07-25 13:34:23.415938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:13.104 [2024-07-25 13:34:23.498704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:13.104 [2024-07-25 13:34:23.519954] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:34:13.104 [2024-07-25 13:34:23.527976] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:34:13.104 [2024-07-25 13:34:23.535996] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:34:13.361 [2024-07-25 13:34:23.645023] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:34:15.891 [2024-07-25 13:34:25.809924] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:34:15.891 [2024-07-25 13:34:25.809985] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:34:15.891 [2024-07-25 13:34:25.809999] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base 
bdev arrival 00:34:15.891 [2024-07-25 13:34:25.817943] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:34:15.891 [2024-07-25 13:34:25.817961] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:34:15.891 [2024-07-25 13:34:25.817971] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:15.891 [2024-07-25 13:34:25.825963] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:34:15.891 [2024-07-25 13:34:25.825979] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:34:15.891 [2024-07-25 13:34:25.825990] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:15.891 [2024-07-25 13:34:25.833983] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:34:15.891 [2024-07-25 13:34:25.833998] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:34:15.891 [2024-07-25 13:34:25.834008] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:15.891 Running I/O for 1 seconds... 
00:34:16.824 
00:34:16.824 Latency(us)
00:34:16.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:16.824 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:16.824 crypto_ram : 1.02 2175.96 8.50 0.00 0.00 58406.21 5164.24 70464.31
00:34:16.824 Job: crypto_ram1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:16.824 crypto_ram1 : 1.02 2189.14 8.55 0.00 0.00 57801.73 5138.02 65431.14
00:34:16.824 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:16.824 crypto_ram2 : 1.02 16800.31 65.63 0.00 0.00 7512.42 2254.44 9909.04
00:34:16.824 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:16.824 crypto_ram3 : 1.02 16832.65 65.75 0.00 0.00 7475.79 2254.44 7864.32
00:34:16.824 ===================================================================================================================
00:34:16.824 Total : 37998.06 148.43 0.00 0.00 13331.14 2254.44 70464.31
00:34:16.824 
00:34:16.824 real 0m4.037s
00:34:16.824 user 0m3.665s
00:34:16.824 sys 0m0.326s
00:34:16.824 13:34:27 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:16.824 13:34:27 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:34:16.824 ************************************
00:34:16.824 END TEST bdev_write_zeroes
00:34:16.824 ************************************
00:34:16.824 13:34:27 blockdev_crypto_qat -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:34:16.824 13:34:27 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:34:16.824 13:34:27 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:34:16.824 13:34:27 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:34:17.082 ************************************
00:34:17.082 START TEST bdev_json_nonenclosed
00:34:17.082 ************************************
00:34:17.082 13:34:27 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:34:17.082 [2024-07-25 13:34:27.396987] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:34:17.082 [2024-07-25 13:34:27.397033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1080279 ]
00:34:17.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:17.082 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:17.083 EAL: Requested device 0000:3d:01.1 cannot be used
00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:17.083 EAL: Requested device 0000:3d:01.2 cannot be used
00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:17.083 EAL: Requested device 0000:3d:01.3 cannot be used
00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:17.083 EAL: Requested device 0000:3d:01.4 cannot be used
00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:17.083 EAL: Requested device 0000:3d:01.5 cannot be used
00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:17.083 EAL: Requested device 0000:3d:01.6 cannot be used
00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:17.083 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3d:02.3 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: 
Requested device 0000:3f:01.5 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:17.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.083 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:17.083 [2024-07-25 13:34:27.515440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.341 [2024-07-25 13:34:27.599638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.341 [2024-07-25 13:34:27.599699] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:34:17.341 [2024-07-25 13:34:27.599715] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:17.341 [2024-07-25 13:34:27.599726] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:17.341 00:34:17.341 real 0m0.341s 00:34:17.341 user 0m0.195s 00:34:17.341 sys 0m0.143s 00:34:17.341 13:34:27 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:17.341 13:34:27 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:34:17.341 ************************************ 00:34:17.341 END TEST bdev_json_nonenclosed 00:34:17.341 ************************************ 00:34:17.341 13:34:27 blockdev_crypto_qat -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:17.341 13:34:27 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:34:17.341 13:34:27 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:17.341 13:34:27 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:17.341 ************************************ 00:34:17.341 START TEST bdev_json_nonarray 00:34:17.341 ************************************ 00:34:17.341 13:34:27 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:17.341 [2024-07-25 13:34:27.827622] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:34:17.341 [2024-07-25 13:34:27.827679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1080480 ] 00:34:17.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.598 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:17.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.598 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:17.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.598 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:17.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.598 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:17.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.598 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:17.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.598 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:17.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.598 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:17.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.598 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:17.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.598 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:17.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.598 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:17.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.598 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:17.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.598 EAL: Requested device 0000:3d:02.3 cannot be used 
00:34:17.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:17.599 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:17.599 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.599 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:17.599 [2024-07-25 13:34:27.961537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.599 [2024-07-25 13:34:28.043336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.599 [2024-07-25 13:34:28.043407] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:34:17.599 [2024-07-25 13:34:28.043424] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:17.599 [2024-07-25 13:34:28.043434] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:17.857 00:34:17.857 real 0m0.358s 00:34:17.857 user 0m0.203s 00:34:17.857 sys 0m0.153s 00:34:17.857 13:34:28 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:17.857 13:34:28 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:34:17.857 ************************************ 00:34:17.857 END TEST bdev_json_nonarray 00:34:17.857 ************************************ 00:34:17.857 13:34:28 blockdev_crypto_qat -- bdev/blockdev.sh@786 -- # [[ crypto_qat == bdev ]] 00:34:17.857 13:34:28 blockdev_crypto_qat -- bdev/blockdev.sh@793 -- # [[ crypto_qat == gpt ]] 00:34:17.857 13:34:28 blockdev_crypto_qat -- bdev/blockdev.sh@797 -- # [[ crypto_qat == crypto_sw ]] 00:34:17.857 13:34:28 blockdev_crypto_qat -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:34:17.857 13:34:28 blockdev_crypto_qat -- bdev/blockdev.sh@810 -- # cleanup 00:34:17.857 13:34:28 blockdev_crypto_qat -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:34:17.857 13:34:28 blockdev_crypto_qat -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:34:17.857 13:34:28 blockdev_crypto_qat -- bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]] 00:34:17.857 13:34:28 blockdev_crypto_qat -- bdev/blockdev.sh@30 -- # [[ crypto_qat == daos ]] 00:34:17.857 13:34:28 blockdev_crypto_qat -- bdev/blockdev.sh@34 -- # [[ crypto_qat = \g\p\t ]] 00:34:17.857 13:34:28 blockdev_crypto_qat -- bdev/blockdev.sh@40 -- # [[ crypto_qat == xnvme ]] 00:34:17.857 00:34:17.857 real 1m10.054s 00:34:17.857 user 2m52.524s 00:34:17.857 sys 0m8.390s 00:34:17.857 13:34:28 blockdev_crypto_qat -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:34:17.857 13:34:28 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:17.857 ************************************ 00:34:17.857 END TEST blockdev_crypto_qat 00:34:17.857 ************************************ 00:34:17.857 13:34:28 -- spdk/autotest.sh@364 -- # run_test chaining /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:34:17.857 13:34:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:17.857 13:34:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:17.857 13:34:28 -- common/autotest_common.sh@10 -- # set +x 00:34:17.857 ************************************ 00:34:17.857 START TEST chaining 00:34:17.857 ************************************ 00:34:17.857 13:34:28 chaining -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:34:17.857 * Looking for test storage... 00:34:18.114 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:34:18.114 13:34:28 chaining -- bdev/chaining.sh@14 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@7 -- # uname -s 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:34:18.114 13:34:28 chaining -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:18.114 13:34:28 chaining -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.114 13:34:28 chaining -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.114 13:34:28 chaining -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.114 13:34:28 chaining -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.114 13:34:28 chaining -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.114 13:34:28 chaining -- paths/export.sh@5 -- # export PATH 00:34:18.114 13:34:28 chaining -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@47 -- # : 0 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:18.114 13:34:28 chaining -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:18.115 13:34:28 chaining -- bdev/chaining.sh@16 -- # nqn=nqn.2016-06.io.spdk:cnode0 00:34:18.115 13:34:28 chaining -- bdev/chaining.sh@17 -- # key0=(00112233445566778899001122334455 11223344556677889900112233445500) 00:34:18.115 13:34:28 chaining -- bdev/chaining.sh@18 -- # key1=(22334455667788990011223344550011 
33445566778899001122334455001122) 00:34:18.115 13:34:28 chaining -- bdev/chaining.sh@19 -- # bperfsock=/var/tmp/bperf.sock 00:34:18.115 13:34:28 chaining -- bdev/chaining.sh@20 -- # declare -A stats 00:34:18.115 13:34:28 chaining -- bdev/chaining.sh@66 -- # nvmftestinit 00:34:18.115 13:34:28 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:18.115 13:34:28 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:18.115 13:34:28 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:18.115 13:34:28 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:18.115 13:34:28 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:18.115 13:34:28 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.115 13:34:28 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:18.115 13:34:28 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.115 13:34:28 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:34:18.115 13:34:28 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:18.115 13:34:28 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:34:18.115 13:34:28 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:28.084 13:34:36 
chaining -- nvmf/common.sh@296 -- # e810=() 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@297 -- # x722=() 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@298 -- # mlx=() 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:28.084 13:34:36 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:28.085 13:34:36 chaining -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:34:28.085 Found 0000:20:00.0 (0x8086 - 0x159b) 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:34:28.085 Found 0000:20:00.1 (0x8086 - 0x159b) 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:34:28.085 Found net devices under 0000:20:00.0: cvl_0_0 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:34:28.085 Found net devices under 0000:20:00.1: cvl_0_1 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:28.085 13:34:36 chaining -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:28.085 13:34:36 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:28.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:28.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:34:28.085 00:34:28.085 --- 10.0.0.2 ping statistics --- 00:34:28.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.085 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:28.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:28.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:34:28.085 00:34:28.085 --- 10.0.0.1 ping statistics --- 00:34:28.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.085 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@422 -- # return 0 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:28.085 13:34:37 chaining -- bdev/chaining.sh@67 -- # nvmfappstart -m 0x2 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:28.085 13:34:37 chaining -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:28.085 13:34:37 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@481 -- # nvmfpid=1084581 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@482 -- # waitforlisten 1084581 00:34:28.085 13:34:37 chaining -- common/autotest_common.sh@831 -- # '[' -z 1084581 ']' 00:34:28.085 13:34:37 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.085 13:34:37 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:28.085 13:34:37 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:28.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.085 13:34:37 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:28.085 13:34:37 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:28.085 13:34:37 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:28.085 [2024-07-25 13:34:37.175698] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:34:28.085 [2024-07-25 13:34:37.175754] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:28.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.085 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:28.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.085 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:28.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.085 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:28.086 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3d:02.3 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:34:28.086 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:28.086 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:28.086 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:28.086 [2024-07-25 13:34:37.302557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.086 [2024-07-25 13:34:37.388743] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:28.086 [2024-07-25 13:34:37.388788] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:28.086 [2024-07-25 13:34:37.388802] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:28.086 [2024-07-25 13:34:37.388814] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:34:28.086 [2024-07-25 13:34:37.388824] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:28.086 [2024-07-25 13:34:37.388855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:28.086 13:34:38 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:28.086 13:34:38 chaining -- common/autotest_common.sh@864 -- # return 0 00:34:28.086 13:34:38 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:28.086 13:34:38 chaining -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:28.086 13:34:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:28.086 13:34:38 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@69 -- # mktemp 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@69 -- # input=/tmp/tmp.5cSbzjoTi6 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@69 -- # mktemp 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@69 -- # output=/tmp/tmp.TE6brFwE22 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@70 -- # trap 'tgtcleanup; exit 1' SIGINT SIGTERM EXIT 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@72 -- # rpc_cmd 00:34:28.086 13:34:38 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.086 13:34:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:28.086 malloc0 00:34:28.086 true 00:34:28.086 true 00:34:28.086 [2024-07-25 13:34:38.446044] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:34:28.086 crypto0 00:34:28.086 [2024-07-25 13:34:38.454068] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:34:28.086 crypto1 00:34:28.086 [2024-07-25 13:34:38.462187] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:28.086 [2024-07-25 13:34:38.478399] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:34:28.086 13:34:38 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@85 -- # update_stats 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@39 -- # opcode= 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:34:28.086 13:34:38 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.086 13:34:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:34:28.086 13:34:38 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=12 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:28.086 13:34:38 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:34:28.086 13:34:38 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.087 13:34:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:28.087 13:34:38 chaining -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]= 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:28.345 13:34:38 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.345 13:34:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:34:28.345 13:34:38 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:28.345 13:34:38 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.345 13:34:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:34:28.345 13:34:38 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@88 -- # dd if=/dev/urandom of=/tmp/tmp.5cSbzjoTi6 bs=1K count=64 00:34:28.345 64+0 records in 00:34:28.345 64+0 records out 00:34:28.345 65536 bytes (66 kB, 64 KiB) copied, 0.00105145 s, 62.3 MB/s 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@89 -- # spdk_dd --if /tmp/tmp.5cSbzjoTi6 --ob Nvme0n1 --bs 65536 --count 1 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@25 -- # local config 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:34:28.345 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@31 -- # config='{ 00:34:28.345 "subsystems": [ 00:34:28.345 { 00:34:28.345 "subsystem": "bdev", 00:34:28.345 "config": [ 00:34:28.345 { 00:34:28.345 "method": "bdev_nvme_attach_controller", 00:34:28.345 "params": { 00:34:28.345 "trtype": "tcp", 00:34:28.345 "adrfam": "IPv4", 00:34:28.345 "name": "Nvme0", 00:34:28.345 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:28.345 "traddr": "10.0.0.2", 00:34:28.345 "trsvcid": "4420" 00:34:28.345 } 00:34:28.345 }, 00:34:28.345 { 00:34:28.345 "method": "bdev_set_options", 00:34:28.345 "params": { 00:34:28.345 "bdev_auto_examine": false 00:34:28.345 } 00:34:28.345 } 00:34:28.345 ] 00:34:28.345 } 00:34:28.345 ] 00:34:28.345 }' 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.5cSbzjoTi6 --ob Nvme0n1 --bs 65536 --count 1 00:34:28.345 13:34:38 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:34:28.345 "subsystems": [ 00:34:28.345 { 00:34:28.345 
"subsystem": "bdev",
00:34:28.345 "config": [
00:34:28.345 {
00:34:28.345 "method": "bdev_nvme_attach_controller",
00:34:28.345 "params": {
00:34:28.345 "trtype": "tcp",
00:34:28.345 "adrfam": "IPv4",
00:34:28.345 "name": "Nvme0",
00:34:28.345 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:28.345 "traddr": "10.0.0.2",
00:34:28.345 "trsvcid": "4420"
00:34:28.345 }
00:34:28.345 },
00:34:28.345 {
00:34:28.345 "method": "bdev_set_options",
00:34:28.345 "params": {
00:34:28.345 "bdev_auto_examine": false
00:34:28.345 }
00:34:28.345 }
00:34:28.345 ]
00:34:28.346 }
00:34:28.346 ]
00:34:28.346 }'
00:34:28.346 [2024-07-25 13:34:38.789794] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:34:28.346 [2024-07-25 13:34:38.789853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084894 ]
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3d:01.1 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3d:01.2 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3d:01.3 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3d:01.4 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3d:01.5 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3d:01.6 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3d:01.7 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3d:02.0 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3d:02.1 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3d:02.2 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3d:02.3 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3d:02.4 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3d:02.5 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3d:02.6 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3d:02.7 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3f:01.0 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3f:01.1 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3f:01.2 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3f:01.3 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3f:01.4 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3f:01.5 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3f:01.6 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3f:01.7 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3f:02.0 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3f:02.1 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3f:02.2 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3f:02.3 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3f:02.4 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3f:02.5 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3f:02.6 cannot be used
00:34:28.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:28.605 EAL: Requested device 0000:3f:02.7 cannot be used
00:34:28.605 [2024-07-25 13:34:38.924030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:28.605 [2024-07-25 13:34:39.006859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:29.120  Copying: 64/64 [kB] (average 15 MBps) 00:34:29.120 00:34:29.120
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@90 -- # get_stat sequence_executed
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:29.120
13:34:39 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@39 -- # opcode=
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]]
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats
00:34:29.120 13:34:39 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:29.120 13:34:39 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:29.120 13:34:39 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@90 -- # (( 13 == stats[sequence_executed] + 1 ))
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@91 -- # get_stat executed encrypt
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@39 -- # event=executed
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]]
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed'
00:34:29.120 13:34:39 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:29.120 13:34:39 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:29.120 13:34:39 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@91 -- # (( 2 == stats[encrypt_executed] + 2 ))
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@92 -- # get_stat executed decrypt
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@39 -- # event=executed
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]]
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed'
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats
00:34:29.120 13:34:39 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:29.120 13:34:39 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:29.120 13:34:39 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@92 -- # (( 12 == stats[decrypt_executed] ))
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@95 -- # get_stat executed copy
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@39 -- # event=executed
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@39 -- # opcode=copy
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]]
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats
00:34:29.120 13:34:39 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:29.120 13:34:39 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed'
00:34:29.120 13:34:39 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@95 -- # (( 4 == stats[copy_executed] ))
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@96 -- # update_stats
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@39 -- # opcode=
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]]
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats
00:34:29.120 13:34:39 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed
00:34:29.120 13:34:39 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:29.120 13:34:39 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:29.120 13:34:39 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=13
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@39 -- # event=executed
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]]
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed'
00:34:29.378 13:34:39 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:29.378 13:34:39 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:29.378 13:34:39 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=2
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@39 -- # event=executed
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]]
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed'
00:34:29.378 13:34:39 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:29.378 13:34:39 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:29.378 13:34:39 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@39 -- # event=executed
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@39 -- # opcode=copy
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]]
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed'
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats
00:34:29.378 13:34:39 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:29.378 13:34:39 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:29.378 13:34:39 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@99 -- # spdk_dd --of /tmp/tmp.TE6brFwE22 --ib Nvme0n1 --bs 65536 --count 1
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@25 -- # local config
00:34:29.378
13:34:39 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |=
00:34:29.378 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}'
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@31 -- # config='{
00:34:29.378 "subsystems": [
00:34:29.378 {
00:34:29.378 "subsystem": "bdev",
00:34:29.378 "config": [
00:34:29.378 {
00:34:29.378 "method": "bdev_nvme_attach_controller",
00:34:29.378 "params": {
00:34:29.378 "trtype": "tcp",
00:34:29.378 "adrfam": "IPv4",
00:34:29.378 "name": "Nvme0",
00:34:29.378 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:29.378 "traddr": "10.0.0.2",
00:34:29.378 "trsvcid": "4420"
00:34:29.378 }
00:34:29.378 },
00:34:29.378 {
00:34:29.378 "method": "bdev_set_options",
00:34:29.378 "params": {
00:34:29.378 "bdev_auto_examine": false
00:34:29.378 }
00:34:29.378 }
00:34:29.378 ]
00:34:29.378 }
00:34:29.378 ]
00:34:29.378 }'
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.TE6brFwE22 --ib Nvme0n1 --bs 65536 --count 1
00:34:29.378 13:34:39 chaining -- bdev/chaining.sh@33 -- # echo '{
00:34:29.378 "subsystems": [
00:34:29.378 {
00:34:29.378 "subsystem": "bdev",
00:34:29.378 "config": [
00:34:29.378 {
00:34:29.378 "method": "bdev_nvme_attach_controller",
00:34:29.378 "params": {
00:34:29.378 "trtype": "tcp",
00:34:29.378 "adrfam": "IPv4",
00:34:29.378 "name": "Nvme0",
00:34:29.378 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:29.378 "traddr": "10.0.0.2",
00:34:29.378 "trsvcid": "4420"
00:34:29.378 }
00:34:29.378 },
00:34:29.378 {
00:34:29.378 "method": "bdev_set_options",
00:34:29.378 "params": {
00:34:29.378 "bdev_auto_examine": false
00:34:29.378 }
00:34:29.378 }
00:34:29.378 ]
00:34:29.378 }
00:34:29.378 ]
00:34:29.378 }'
00:34:29.378 [2024-07-25 13:34:39.850512] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:34:29.378 [2024-07-25 13:34:39.850575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085189 ]
00:34:29.637 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:29.637 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:29.637 [same pair of messages repeated for each remaining QAT VF, 0000:3d:01.1 through 0000:3f:02.7]
00:34:29.638 [2024-07-25 13:34:39.983434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:29.638 [2024-07-25 13:34:40.069435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:30.204  Copying: 64/64 [kB] (average 31 MBps) 00:34:30.204 00:34:30.204
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@100 -- # get_stat sequence_executed
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@39 -- # opcode=
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]]
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed
00:34:30.204 13:34:40 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.204 13:34:40
chaining -- common/autotest_common.sh@10 -- # set +x
00:34:30.204 13:34:40 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@100 -- # (( 14 == stats[sequence_executed] + 1 ))
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@101 -- # get_stat executed encrypt
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@39 -- # event=executed
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]]
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats
00:34:30.204 13:34:40 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed'
00:34:30.204 13:34:40 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.204 13:34:40 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:30.464 13:34:40 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@101 -- # (( 2 == stats[encrypt_executed] ))
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@102 -- # get_stat executed decrypt
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@39 -- # event=executed
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]]
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed'
00:34:30.464 13:34:40 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.464 13:34:40 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:30.464 13:34:40 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@102 -- # (( 14 == stats[decrypt_executed] + 2 ))
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@103 -- # get_stat executed copy
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@39 -- # event=executed
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@39 -- # opcode=copy
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]]
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed'
00:34:30.464 13:34:40 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.464 13:34:40 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:30.464 13:34:40 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@103 -- # (( 4 == stats[copy_executed] ))
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@104 -- # cmp /tmp/tmp.5cSbzjoTi6 /tmp/tmp.TE6brFwE22
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@105 -- # spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@25 -- # local config
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |=
00:34:30.464 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}'
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0
00:34:30.464 13:34:40 chaining --
bdev/chaining.sh@31 -- # config='{
00:34:30.464 "subsystems": [
00:34:30.464 {
00:34:30.464 "subsystem": "bdev",
00:34:30.464 "config": [
00:34:30.464 {
00:34:30.464 "method": "bdev_nvme_attach_controller",
00:34:30.464 "params": {
00:34:30.464 "trtype": "tcp",
00:34:30.464 "adrfam": "IPv4",
00:34:30.464 "name": "Nvme0",
00:34:30.464 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:30.464 "traddr": "10.0.0.2",
00:34:30.464 "trsvcid": "4420"
00:34:30.464 }
00:34:30.464 },
00:34:30.464 {
00:34:30.464 "method": "bdev_set_options",
00:34:30.464 "params": {
00:34:30.464 "bdev_auto_examine": false
00:34:30.464 }
00:34:30.464 }
00:34:30.464 ]
00:34:30.464 }
00:34:30.464 ]
00:34:30.464 }'
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@33 -- # echo '{
00:34:30.464 "subsystems": [
00:34:30.464 {
00:34:30.464 "subsystem": "bdev",
00:34:30.464 "config": [
00:34:30.464 {
00:34:30.464 "method": "bdev_nvme_attach_controller",
00:34:30.464 "params": {
00:34:30.464 "trtype": "tcp",
00:34:30.464 "adrfam": "IPv4",
00:34:30.464 "name": "Nvme0",
00:34:30.464 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:30.464 "traddr": "10.0.0.2",
00:34:30.464 "trsvcid": "4420"
00:34:30.464 }
00:34:30.464 },
00:34:30.464 {
00:34:30.464 "method": "bdev_set_options",
00:34:30.464 "params": {
00:34:30.464 "bdev_auto_examine": false
00:34:30.464 }
00:34:30.464 }
00:34:30.464 ]
00:34:30.464 }
00:34:30.464 ]
00:34:30.464 }'
00:34:30.464 13:34:40 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1
00:34:30.464 [2024-07-25 13:34:40.918099] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization...
00:34:30.464 [2024-07-25 13:34:40.918178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085355 ]
00:34:30.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:30.748 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:30.748 [same pair of messages repeated for each remaining QAT VF, 0000:3d:01.1 through 0000:3f:02.7]
00:34:30.748 [2024-07-25 13:34:41.050422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:30.748 [2024-07-25 13:34:41.133394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:31.281  Copying: 64/64 [kB] (average 62 MBps) 00:34:31.281 00:34:31.281
00:34:31.281 13:34:41 chaining -- bdev/chaining.sh@106 -- # update_stats
00:34:31.281 13:34:41 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed
00:34:31.281 13:34:41 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:31.281 13:34:41 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed
00:34:31.281 13:34:41 chaining -- bdev/chaining.sh@39 -- # opcode=
00:34:31.281 13:34:41 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:31.281 13:34:41 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]]
00:34:31.281 13:34:41 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats
00:34:31.282 13:34:41 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed
00:34:31.282 13:34:41 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:31.282 13:34:41 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:31.282 13:34:41 chaining
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:31.282 13:34:41 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=15
00:34:31.282 13:34:41 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt
00:34:31.282 13:34:41 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:31.282 13:34:41 chaining -- bdev/chaining.sh@39 -- # event=executed
00:34:31.282 13:34:41 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt
00:34:31.282 13:34:41 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:31.282 13:34:41 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]]
00:34:31.282 13:34:41 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed'
00:34:31.282 13:34:41 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats
00:34:31.282 13:34:41 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:31.282 13:34:41 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:31.282 13:34:41 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=4
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@39 -- # event=executed
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]]
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed'
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats
00:34:31.540 13:34:41 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:31.540 13:34:41 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:31.540 13:34:41 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@39 -- # event=executed
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@39 -- # opcode=copy
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]]
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed'
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats
00:34:31.540 13:34:41 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:31.540 13:34:41 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:31.540 13:34:41 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@109 -- # spdk_dd --if /tmp/tmp.5cSbzjoTi6 --ob Nvme0n1 --bs 4096 --count 16
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@25 -- # local config
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |=
00:34:31.540 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}'
00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@31 -- # config='{
00:34:31.540 "subsystems": [
00:34:31.540 {
00:34:31.540 "subsystem": "bdev",
00:34:31.540 "config": [
00:34:31.540 {
00:34:31.540 "method": "bdev_nvme_attach_controller",
00:34:31.540 "params":
{ 00:34:31.540 "trtype": "tcp", 00:34:31.540 "adrfam": "IPv4", 00:34:31.540 "name": "Nvme0", 00:34:31.540 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:31.540 "traddr": "10.0.0.2", 00:34:31.540 "trsvcid": "4420" 00:34:31.540 } 00:34:31.540 }, 00:34:31.540 { 00:34:31.540 "method": "bdev_set_options", 00:34:31.540 "params": { 00:34:31.540 "bdev_auto_examine": false 00:34:31.540 } 00:34:31.540 } 00:34:31.540 ] 00:34:31.540 } 00:34:31.540 ] 00:34:31.540 }' 00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.5cSbzjoTi6 --ob Nvme0n1 --bs 4096 --count 16 00:34:31.540 13:34:41 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:34:31.540 "subsystems": [ 00:34:31.540 { 00:34:31.540 "subsystem": "bdev", 00:34:31.540 "config": [ 00:34:31.540 { 00:34:31.540 "method": "bdev_nvme_attach_controller", 00:34:31.540 "params": { 00:34:31.540 "trtype": "tcp", 00:34:31.540 "adrfam": "IPv4", 00:34:31.540 "name": "Nvme0", 00:34:31.540 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:31.540 "traddr": "10.0.0.2", 00:34:31.540 "trsvcid": "4420" 00:34:31.540 } 00:34:31.540 }, 00:34:31.540 { 00:34:31.540 "method": "bdev_set_options", 00:34:31.540 "params": { 00:34:31.540 "bdev_auto_examine": false 00:34:31.540 } 00:34:31.540 } 00:34:31.540 ] 00:34:31.540 } 00:34:31.540 ] 00:34:31.540 }' 00:34:31.540 [2024-07-25 13:34:41.951757] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:34:31.540 [2024-07-25 13:34:41.951816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085513 ]
00:34:31.540 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:31.540 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:31.798 [2024-07-25 13:34:42.082032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:31.798 [2024-07-25 13:34:42.164124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:32.314  Copying: 64/64 [kB] (average 9142 kBps)
00:34:32.314 
00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@110 -- # get_stat sequence_executed
00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed
00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@39 -- # opcode=
00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]]
00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats
00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed
00:34:32.314 13:34:42 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:32.314 13:34:42 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:32.314 13:34:42 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:32.314
13:34:42 chaining -- bdev/chaining.sh@110 -- # (( 31 == stats[sequence_executed] + 16 )) 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@111 -- # get_stat executed encrypt 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:32.314 13:34:42 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.314 13:34:42 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:34:32.314 13:34:42 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@111 -- # (( 36 == stats[encrypt_executed] + 32 )) 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@112 -- # get_stat executed decrypt 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:32.314 13:34:42 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.314 13:34:42 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:32.314 13:34:42 chaining -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@112 -- # (( 14 == stats[decrypt_executed] )) 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@113 -- # get_stat executed copy 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:32.314 13:34:42 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.314 13:34:42 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:32.314 13:34:42 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@113 -- # (( 4 == stats[copy_executed] )) 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@114 -- # update_stats 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@39 -- # opcode= 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:34:32.314 13:34:42 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:34:32.314 13:34:42 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.314 13:34:42 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:32.314 13:34:42 chaining -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=31 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:34:32.573 13:34:42 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.573 13:34:42 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:32.573 13:34:42 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=36 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:34:32.573 13:34:42 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.573 13:34:42 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:32.573 13:34:42 chaining -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:32.573 13:34:42 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.573 13:34:42 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:32.573 13:34:42 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@117 -- # : 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@118 -- # spdk_dd --of /tmp/tmp.TE6brFwE22 --ib Nvme0n1 --bs 4096 --count 16 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@25 -- # local config 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:34:32.573 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@31 -- # config='{ 00:34:32.573 "subsystems": [ 00:34:32.573 { 00:34:32.573 "subsystem": "bdev", 00:34:32.573 "config": [ 00:34:32.573 { 00:34:32.573 
"method": "bdev_nvme_attach_controller", 00:34:32.573 "params": { 00:34:32.573 "trtype": "tcp", 00:34:32.573 "adrfam": "IPv4", 00:34:32.573 "name": "Nvme0", 00:34:32.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:32.573 "traddr": "10.0.0.2", 00:34:32.573 "trsvcid": "4420" 00:34:32.573 } 00:34:32.573 }, 00:34:32.573 { 00:34:32.573 "method": "bdev_set_options", 00:34:32.573 "params": { 00:34:32.573 "bdev_auto_examine": false 00:34:32.573 } 00:34:32.573 } 00:34:32.573 ] 00:34:32.573 } 00:34:32.573 ] 00:34:32.573 }' 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.TE6brFwE22 --ib Nvme0n1 --bs 4096 --count 16 00:34:32.573 13:34:42 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:34:32.573 "subsystems": [ 00:34:32.573 { 00:34:32.573 "subsystem": "bdev", 00:34:32.573 "config": [ 00:34:32.573 { 00:34:32.573 "method": "bdev_nvme_attach_controller", 00:34:32.573 "params": { 00:34:32.573 "trtype": "tcp", 00:34:32.573 "adrfam": "IPv4", 00:34:32.573 "name": "Nvme0", 00:34:32.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:32.573 "traddr": "10.0.0.2", 00:34:32.573 "trsvcid": "4420" 00:34:32.573 } 00:34:32.573 }, 00:34:32.573 { 00:34:32.573 "method": "bdev_set_options", 00:34:32.573 "params": { 00:34:32.573 "bdev_auto_examine": false 00:34:32.573 } 00:34:32.573 } 00:34:32.573 ] 00:34:32.573 } 00:34:32.573 ] 00:34:32.573 }' 00:34:32.573 [2024-07-25 13:34:43.041806] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:34:32.573 [2024-07-25 13:34:43.041852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085803 ]
00:34:32.832 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:32.832 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:32.832 [2024-07-25 13:34:43.157328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:32.832 [2024-07-25 13:34:43.239607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:33.404  Copying: 64/64 [kB] (average 719 kBps)
00:34:33.404 
00:34:33.404 13:34:43 chaining -- bdev/chaining.sh@119 -- # get_stat sequence_executed
00:34:33.404 13:34:43 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:34:33.404 13:34:43 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed
00:34:33.404 13:34:43 chaining -- bdev/chaining.sh@39 -- # opcode=
00:34:33.404 13:34:43 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:34:33.404 13:34:43 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]]
00:34:33.404 13:34:43 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed
00:34:33.404 13:34:43 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats
00:34:33.404 13:34:43 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:33.404 13:34:43 chaining -- common/autotest_common.sh@10 -- # set +x
00:34:33.404 13:34:43 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:33.404 13:34:43
chaining -- bdev/chaining.sh@119 -- # (( 47 == stats[sequence_executed] + 16 )) 00:34:33.404 13:34:43 chaining -- bdev/chaining.sh@120 -- # get_stat executed encrypt 00:34:33.404 13:34:43 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:33.404 13:34:43 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:33.404 13:34:43 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:34:33.404 13:34:43 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:33.404 13:34:43 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:34:33.663 13:34:43 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.663 13:34:43 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:33.663 13:34:43 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@120 -- # (( 36 == stats[encrypt_executed] )) 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@121 -- # get_stat executed decrypt 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:34:33.663 13:34:43 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.663 13:34:43 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:33.663 13:34:43 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@121 -- # (( 46 == stats[decrypt_executed] + 32 )) 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@122 -- # get_stat executed copy 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:33.663 13:34:43 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:34:33.663 13:34:43 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.663 13:34:43 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:33.663 13:34:44 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.663 13:34:44 chaining -- bdev/chaining.sh@122 -- # (( 4 == stats[copy_executed] )) 00:34:33.663 13:34:44 chaining -- bdev/chaining.sh@123 -- # cmp /tmp/tmp.5cSbzjoTi6 /tmp/tmp.TE6brFwE22 00:34:33.663 13:34:44 chaining -- bdev/chaining.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:34:33.663 13:34:44 chaining -- bdev/chaining.sh@126 -- # tgtcleanup 00:34:33.663 13:34:44 chaining -- bdev/chaining.sh@58 -- # rm -f /tmp/tmp.5cSbzjoTi6 /tmp/tmp.TE6brFwE22 00:34:33.663 13:34:44 chaining -- bdev/chaining.sh@59 -- # nvmftestfini 00:34:33.663 13:34:44 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:33.663 13:34:44 chaining -- nvmf/common.sh@117 -- # sync 00:34:33.663 13:34:44 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:33.663 13:34:44 chaining -- nvmf/common.sh@120 -- # set +e 00:34:33.663 13:34:44 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:33.663 13:34:44 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:33.663 rmmod nvme_tcp 
00:34:33.663 rmmod nvme_fabrics 00:34:33.663 rmmod nvme_keyring 00:34:33.663 13:34:44 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:33.663 13:34:44 chaining -- nvmf/common.sh@124 -- # set -e 00:34:33.663 13:34:44 chaining -- nvmf/common.sh@125 -- # return 0 00:34:33.663 13:34:44 chaining -- nvmf/common.sh@489 -- # '[' -n 1084581 ']' 00:34:33.663 13:34:44 chaining -- nvmf/common.sh@490 -- # killprocess 1084581 00:34:33.663 13:34:44 chaining -- common/autotest_common.sh@950 -- # '[' -z 1084581 ']' 00:34:33.663 13:34:44 chaining -- common/autotest_common.sh@954 -- # kill -0 1084581 00:34:33.663 13:34:44 chaining -- common/autotest_common.sh@955 -- # uname 00:34:33.663 13:34:44 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:33.663 13:34:44 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1084581 00:34:33.921 13:34:44 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:33.921 13:34:44 chaining -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:33.921 13:34:44 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1084581' 00:34:33.921 killing process with pid 1084581 00:34:33.921 13:34:44 chaining -- common/autotest_common.sh@969 -- # kill 1084581 00:34:33.921 13:34:44 chaining -- common/autotest_common.sh@974 -- # wait 1084581 00:34:33.921 13:34:44 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:33.921 13:34:44 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:33.921 13:34:44 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:33.921 13:34:44 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:33.921 13:34:44 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:33.921 13:34:44 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.921 13:34:44 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
00:34:33.921 13:34:44 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:36.452 13:34:46 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:36.452 13:34:46 chaining -- bdev/chaining.sh@129 -- # trap 'bperfcleanup; exit 1' SIGINT SIGTERM EXIT 00:34:36.452 13:34:46 chaining -- bdev/chaining.sh@132 -- # bperfpid=1086387 00:34:36.452 13:34:46 chaining -- bdev/chaining.sh@134 -- # waitforlisten 1086387 00:34:36.452 13:34:46 chaining -- common/autotest_common.sh@831 -- # '[' -z 1086387 ']' 00:34:36.452 13:34:46 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.452 13:34:46 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:36.452 13:34:46 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:36.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:36.452 13:34:46 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:36.452 13:34:46 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:36.452 13:34:46 chaining -- bdev/chaining.sh@131 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:34:36.452 [2024-07-25 13:34:46.480843] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:34:36.452 [2024-07-25 13:34:46.480910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086387 ] 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3d:02.3 cannot be used 
00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:36.452 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:36.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:36.452 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:36.452 [2024-07-25 13:34:46.611873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:36.452 [2024-07-25 13:34:46.697850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:37.019 13:34:47 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:37.019 13:34:47 chaining -- common/autotest_common.sh@864 -- # return 0 00:34:37.019 13:34:47 chaining -- bdev/chaining.sh@135 -- # rpc_cmd 00:34:37.019 13:34:47 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.019 13:34:47 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:37.019 malloc0 00:34:37.019 true 00:34:37.019 true 00:34:37.019 [2024-07-25 13:34:47.442019] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:34:37.019 crypto0 00:34:37.019 [2024-07-25 13:34:47.450044] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:34:37.019 crypto1 00:34:37.019 13:34:47 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.019 13:34:47 chaining -- bdev/chaining.sh@145 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py 
perform_tests 00:34:37.277 Running I/O for 5 seconds... 00:34:42.543 00:34:42.543 Latency(us) 00:34:42.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.543 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:34:42.543 Verification LBA range: start 0x0 length 0x2000 00:34:42.543 crypto1 : 5.01 12451.38 48.64 0.00 0.00 20502.67 5924.45 13421.77 00:34:42.543 =================================================================================================================== 00:34:42.543 Total : 12451.38 48.64 0.00 0.00 20502.67 5924.45 13421.77 00:34:42.543 0 00:34:42.543 13:34:52 chaining -- bdev/chaining.sh@146 -- # killprocess 1086387 00:34:42.543 13:34:52 chaining -- common/autotest_common.sh@950 -- # '[' -z 1086387 ']' 00:34:42.543 13:34:52 chaining -- common/autotest_common.sh@954 -- # kill -0 1086387 00:34:42.543 13:34:52 chaining -- common/autotest_common.sh@955 -- # uname 00:34:42.543 13:34:52 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:42.543 13:34:52 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1086387 00:34:42.543 13:34:52 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:42.543 13:34:52 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:42.543 13:34:52 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1086387' 00:34:42.543 killing process with pid 1086387 00:34:42.543 13:34:52 chaining -- common/autotest_common.sh@969 -- # kill 1086387 00:34:42.543 Received shutdown signal, test time was about 5.000000 seconds 00:34:42.543 00:34:42.543 Latency(us) 00:34:42.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.543 =================================================================================================================== 00:34:42.543 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:42.543 13:34:52 chaining -- 
common/autotest_common.sh@974 -- # wait 1086387 00:34:42.543 13:34:52 chaining -- bdev/chaining.sh@152 -- # bperfpid=1087437 00:34:42.543 13:34:52 chaining -- bdev/chaining.sh@151 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:34:42.543 13:34:52 chaining -- bdev/chaining.sh@154 -- # waitforlisten 1087437 00:34:42.543 13:34:52 chaining -- common/autotest_common.sh@831 -- # '[' -z 1087437 ']' 00:34:42.543 13:34:52 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:42.543 13:34:52 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:42.543 13:34:52 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:42.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:42.543 13:34:52 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:42.543 13:34:52 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:42.543 [2024-07-25 13:34:52.961219] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 
00:34:42.543 [2024-07-25 13:34:52.961353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087437 ] 00:34:42.803 [2024-07-25 13:34:53.165252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.803 [2024-07-25 13:34:53.247504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.369 13:34:53 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:43.369 13:34:53 chaining -- common/autotest_common.sh@864 -- # return 0 00:34:43.369 13:34:53 chaining -- bdev/chaining.sh@155 -- # rpc_cmd 00:34:43.369 13:34:53 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.369 13:34:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:43.626 malloc0 00:34:43.626 true 00:34:43.626 true 00:34:43.626 [2024-07-25 13:34:53.921176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:34:43.626 [2024-07-25 13:34:53.921221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:43.626 [2024-07-25 13:34:53.921240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a0f3b0 00:34:43.626 [2024-07-25 13:34:53.921251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:43.626 [2024-07-25
13:34:53.922240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:43.626 [2024-07-25 13:34:53.922263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:34:43.626 pt0 00:34:43.626 [2024-07-25 13:34:53.929202] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:34:43.626 crypto0 00:34:43.626 [2024-07-25 13:34:53.937222] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:34:43.626 crypto1 00:34:43.626 13:34:53 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.626 13:34:53 chaining -- bdev/chaining.sh@166 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:43.626 Running I/O for 5 seconds... 00:34:48.888 00:34:48.888 Latency(us) 00:34:48.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:48.888 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:34:48.888 Verification LBA range: start 0x0 length 0x2000 00:34:48.888 crypto1 : 5.01 9719.46 37.97 0.00 0.00 26256.01 3460.30 15938.36 00:34:48.888 =================================================================================================================== 00:34:48.888 Total : 9719.46 37.97 0.00 0.00 26256.01 3460.30 15938.36 00:34:48.888 0 00:34:48.888 13:34:59 chaining -- bdev/chaining.sh@167 -- # killprocess 1087437 00:34:48.888 13:34:59 chaining -- common/autotest_common.sh@950 -- # '[' -z 1087437 ']' 00:34:48.888 13:34:59 chaining -- common/autotest_common.sh@954 -- # kill -0 1087437 00:34:48.888 13:34:59 chaining -- common/autotest_common.sh@955 -- # uname 00:34:48.888 13:34:59 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:48.888 13:34:59 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1087437 00:34:48.888 13:34:59 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:48.888 13:34:59 chaining 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:48.888 13:34:59 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1087437' 00:34:48.888 killing process with pid 1087437 00:34:48.888 13:34:59 chaining -- common/autotest_common.sh@969 -- # kill 1087437 00:34:48.888 Received shutdown signal, test time was about 5.000000 seconds 00:34:48.888 00:34:48.888 Latency(us) 00:34:48.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:48.888 =================================================================================================================== 00:34:48.888 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:48.888 13:34:59 chaining -- common/autotest_common.sh@974 -- # wait 1087437 00:34:48.888 13:34:59 chaining -- bdev/chaining.sh@169 -- # trap - SIGINT SIGTERM EXIT 00:34:48.888 13:34:59 chaining -- bdev/chaining.sh@170 -- # killprocess 1087437 00:34:48.888 13:34:59 chaining -- common/autotest_common.sh@950 -- # '[' -z 1087437 ']' 00:34:48.888 13:34:59 chaining -- common/autotest_common.sh@954 -- # kill -0 1087437 00:34:48.888 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1087437) - No such process 00:34:48.888 13:34:59 chaining -- common/autotest_common.sh@977 -- # echo 'Process with pid 1087437 is not found' 00:34:48.888 Process with pid 1087437 is not found 00:34:48.888 13:34:59 chaining -- bdev/chaining.sh@171 -- # wait 1087437 00:34:48.888 13:34:59 chaining -- bdev/chaining.sh@175 -- # nvmftestinit 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@628 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.888 13:34:59 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:48.888 13:34:59 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:34:48.888 13:34:59 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@296 -- # e810=() 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@297 -- # x722=() 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@298 -- # mlx=() 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:48.888 13:34:59 
chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:34:48.888 Found 0000:20:00.0 (0x8086 - 0x159b) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:48.888 
13:34:59 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:34:48.888 Found 0000:20:00.1 (0x8086 - 0x159b) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:34:48.888 Found net devices under 0000:20:00.0: cvl_0_0 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.888 13:34:59 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:34:48.889 13:34:59 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:34:48.889 Found net devices under 0000:20:00.1: cvl_0_1 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:48.889 13:34:59 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:49.146 13:34:59 chaining -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:49.146 13:34:59 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:49.146 13:34:59 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:49.146 13:34:59 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:49.146 13:34:59 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:49.146 13:34:59 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:49.146 13:34:59 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:49.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:49.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:34:49.146 00:34:49.146 --- 10.0.0.2 ping statistics --- 00:34:49.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:49.146 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:34:49.146 13:34:59 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:49.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:49.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:34:49.146 00:34:49.146 --- 10.0.0.1 ping statistics --- 00:34:49.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:49.146 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:34:49.146 13:34:59 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:49.147 13:34:59 chaining -- nvmf/common.sh@422 -- # return 0 00:34:49.147 13:34:59 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:49.147 13:34:59 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:49.147 13:34:59 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:49.147 13:34:59 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:49.147 13:34:59 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:49.147 13:34:59 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:49.147 13:34:59 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:49.147 13:34:59 chaining -- bdev/chaining.sh@176 -- # nvmfappstart -m 0x2 00:34:49.147 13:34:59 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:49.147 13:34:59 chaining -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:49.147 13:34:59 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:49.147 13:34:59 chaining -- nvmf/common.sh@481 -- # nvmfpid=1088528 00:34:49.147 13:34:59 chaining -- nvmf/common.sh@482 -- # waitforlisten 1088528 00:34:49.147 13:34:59 chaining -- common/autotest_common.sh@831 -- # '[' -z 1088528 ']' 00:34:49.147 13:34:59 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:49.147 13:34:59 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:49.147 13:34:59 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:49.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:49.147 13:34:59 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:49.147 13:34:59 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:49.147 13:34:59 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:49.405 [2024-07-25 13:34:59.689169] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:34:49.405 [2024-07-25 13:34:59.689228] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:49.405 [2024-07-25 13:34:59.815195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.663 [2024-07-25 13:34:59.902560] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:49.663 [2024-07-25 13:34:59.902601] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:49.663 [2024-07-25 13:34:59.902615] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:49.663 [2024-07-25 13:34:59.902627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:49.663 [2024-07-25 13:34:59.902636] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:49.663 [2024-07-25 13:34:59.902667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:50.228 13:35:00 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:50.228 13:35:00 chaining -- common/autotest_common.sh@864 -- # return 0 00:34:50.228 13:35:00 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:50.228 13:35:00 chaining -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:50.228 13:35:00 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:50.228 13:35:00 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:50.228 13:35:00 chaining -- bdev/chaining.sh@178 -- # rpc_cmd 00:34:50.228 13:35:00 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.228 13:35:00 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:50.228 malloc0 00:34:50.228 [2024-07-25 13:35:00.642300] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:50.228 [2024-07-25 13:35:00.658504] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:50.228 13:35:00 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.228 13:35:00 chaining -- bdev/chaining.sh@186 -- # trap 'bperfcleanup || :; nvmftestfini || :; exit 1' SIGINT SIGTERM EXIT 00:34:50.228 13:35:00 chaining -- bdev/chaining.sh@189 -- # bperfpid=1088802 00:34:50.228 13:35:00 chaining -- bdev/chaining.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w 
verify -o 4096 -q 256 --wait-for-rpc -z 00:34:50.228 13:35:00 chaining -- bdev/chaining.sh@191 -- # waitforlisten 1088802 /var/tmp/bperf.sock 00:34:50.228 13:35:00 chaining -- common/autotest_common.sh@831 -- # '[' -z 1088802 ']' 00:34:50.228 13:35:00 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:50.228 13:35:00 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:50.228 13:35:00 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:50.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:50.228 13:35:00 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:50.228 13:35:00 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:50.228 [2024-07-25 13:35:00.705062] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:34:50.228 [2024-07-25 13:35:00.705106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088802 ] 00:34:50.500 [2024-07-25 13:35:00.823128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.500 [2024-07-25 13:35:00.910814] reactor.c: 941:reactor_run:
*NOTICE*: Reactor started on core 0 00:34:50.837 13:35:01 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:50.837 13:35:01 chaining -- common/autotest_common.sh@864 -- # return 0 00:34:50.837 13:35:01 chaining -- bdev/chaining.sh@192 -- # rpc_bperf 00:34:50.837 13:35:01 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:34:51.111 [2024-07-25 13:35:01.559968] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:34:51.111 nvme0n1 00:34:51.111 true 00:34:51.111 crypto0 00:34:51.111 13:35:01 chaining -- bdev/chaining.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:51.369 Running I/O for 5 seconds... 00:34:56.631 00:34:56.631 Latency(us) 00:34:56.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.631 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:34:56.631 Verification LBA range: start 0x0 length 0x2000 00:34:56.631 crypto0 : 5.02 9411.11 36.76 0.00 0.00 27114.13 3171.94 21810.38 00:34:56.631 =================================================================================================================== 00:34:56.631 Total : 9411.11 36.76 0.00 0.00 27114.13 3171.94 21810.38 00:34:56.631 0 00:34:56.631 13:35:06 chaining -- bdev/chaining.sh@205 -- # get_stat_bperf sequence_executed 00:34:56.631 13:35:06 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:34:56.631 13:35:06 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:56.631 13:35:06 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:34:56.631 13:35:06 chaining -- bdev/chaining.sh@39 -- # opcode= 00:34:56.631 13:35:06 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:34:56.631 13:35:06 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:34:56.631 13:35:06 chaining -- bdev/chaining.sh@41 -- # 
rpc_bperf accel_get_stats 00:34:56.631 13:35:06 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:34:56.631 13:35:06 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:56.631 13:35:06 chaining -- bdev/chaining.sh@205 -- # sequence=94456 00:34:56.631 13:35:06 chaining -- bdev/chaining.sh@206 -- # get_stat_bperf executed encrypt 00:34:56.631 13:35:06 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:34:56.632 13:35:06 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:56.632 13:35:06 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:56.632 13:35:06 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:34:56.632 13:35:06 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:34:56.632 13:35:06 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:34:56.632 13:35:06 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:34:56.632 13:35:06 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:34:56.632 13:35:06 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:56.889 13:35:07 chaining -- bdev/chaining.sh@206 -- # encrypt=47228 00:34:56.889 13:35:07 chaining -- bdev/chaining.sh@207 -- # get_stat_bperf executed decrypt 00:34:56.889 13:35:07 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:34:56.889 13:35:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:56.889 13:35:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:56.889 13:35:07 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:34:56.889 13:35:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:34:56.889 13:35:07 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:34:56.889 13:35:07 chaining -- bdev/chaining.sh@43 -- # rpc_bperf 
accel_get_stats 00:34:56.889 13:35:07 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:56.889 13:35:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:34:57.147 13:35:07 chaining -- bdev/chaining.sh@207 -- # decrypt=47228 00:34:57.147 13:35:07 chaining -- bdev/chaining.sh@208 -- # get_stat_bperf executed crc32c 00:34:57.147 13:35:07 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:34:57.147 13:35:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:57.147 13:35:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:57.147 13:35:07 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:34:57.147 13:35:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:34:57.147 13:35:07 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:34:57.147 13:35:07 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:34:57.147 13:35:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:34:57.147 13:35:07 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:57.147 13:35:07 chaining -- bdev/chaining.sh@208 -- # crc32c=94456 00:34:57.147 13:35:07 chaining -- bdev/chaining.sh@210 -- # (( sequence > 0 )) 00:34:57.147 13:35:07 chaining -- bdev/chaining.sh@211 -- # (( encrypt + decrypt == sequence )) 00:34:57.147 13:35:07 chaining -- bdev/chaining.sh@212 -- # (( encrypt + decrypt == crc32c )) 00:34:57.147 13:35:07 chaining -- bdev/chaining.sh@214 -- # killprocess 1088802 00:34:57.147 13:35:07 chaining -- common/autotest_common.sh@950 -- # '[' -z 1088802 ']' 00:34:57.147 13:35:07 chaining -- common/autotest_common.sh@954 -- # kill -0 1088802 00:34:57.147 13:35:07 chaining -- common/autotest_common.sh@955 -- # uname 00:34:57.147 13:35:07 
chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:57.147 13:35:07 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1088802 00:34:57.405 13:35:07 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:57.405 13:35:07 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:57.405 13:35:07 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1088802' 00:34:57.405 killing process with pid 1088802 00:34:57.405 13:35:07 chaining -- common/autotest_common.sh@969 -- # kill 1088802 00:34:57.405 Received shutdown signal, test time was about 5.000000 seconds 00:34:57.405 00:34:57.405 Latency(us) 00:34:57.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.405 =================================================================================================================== 00:34:57.405 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:57.405 13:35:07 chaining -- common/autotest_common.sh@974 -- # wait 1088802 00:34:57.405 13:35:07 chaining -- bdev/chaining.sh@219 -- # bperfpid=1089885 00:34:57.405 13:35:07 chaining -- bdev/chaining.sh@217 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z 00:34:57.405 13:35:07 chaining -- bdev/chaining.sh@221 -- # waitforlisten 1089885 /var/tmp/bperf.sock 00:34:57.405 13:35:07 chaining -- common/autotest_common.sh@831 -- # '[' -z 1089885 ']' 00:34:57.405 13:35:07 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:57.405 13:35:07 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:57.405 13:35:07 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:57.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:57.405 13:35:07 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:57.405 13:35:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:57.664 [2024-07-25 13:35:07.927415] Starting SPDK v24.09-pre git sha1 325310f6a / DPDK 24.03.0 initialization... 00:34:57.664 [2024-07-25 13:35:07.927479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089885 ] 00:34:57.664 [2024-07-25 13:35:08.061022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.664 [2024-07-25 13:35:08.140129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.597 13:35:08 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:58.597 13:35:08 chaining -- common/autotest_common.sh@864 -- # return 0 00:34:58.597 13:35:08 chaining -- bdev/chaining.sh@222 -- # rpc_bperf 00:34:58.597 13:35:08 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:34:58.855 [2024-07-25 13:35:09.212088] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:34:58.855 nvme0n1 00:34:58.855 true 00:34:58.855 crypto0 00:34:58.855 13:35:09 chaining -- bdev/chaining.sh@231 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:58.855 Running I/O for 5 seconds... 00:35:04.120 00:35:04.120 Latency(us) 00:35:04.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.120 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:35:04.120 Verification LBA range: start 0x0 length 0x200 00:35:04.120 crypto0 : 5.01 1915.42 119.71 0.00 0.00 16363.94 1494.22 17616.08 00:35:04.120 =================================================================================================================== 00:35:04.120 Total : 1915.42 119.71 0.00 0.00 16363.94 1494.22 17616.08 00:35:04.120 0 00:35:04.120 13:35:14 chaining -- bdev/chaining.sh@233 -- # get_stat_bperf sequence_executed 00:35:04.120 13:35:14 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:35:04.120 13:35:14 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:04.120 13:35:14 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:35:04.120 13:35:14 chaining -- bdev/chaining.sh@39 -- # opcode= 00:35:04.120 13:35:14 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:04.120 13:35:14 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:35:04.120 13:35:14 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:35:04.120 13:35:14 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:35:04.120 13:35:14 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:04.120 13:35:14 chaining -- bdev/chaining.sh@233 -- # sequence=19180 00:35:04.377 13:35:14 chaining -- bdev/chaining.sh@234 -- # get_stat_bperf executed encrypt 00:35:04.377 13:35:14 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:35:04.377 13:35:14 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:04.377 13:35:14 chaining 
-- bdev/chaining.sh@39 -- # event=executed 00:35:04.377 13:35:14 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:35:04.377 13:35:14 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:04.377 13:35:14 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:35:04.377 13:35:14 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:35:04.377 13:35:14 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:35:04.378 13:35:14 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:04.378 13:35:14 chaining -- bdev/chaining.sh@234 -- # encrypt=9590 00:35:04.378 13:35:14 chaining -- bdev/chaining.sh@235 -- # get_stat_bperf executed decrypt 00:35:04.378 13:35:14 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:35:04.378 13:35:14 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:04.378 13:35:14 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:04.378 13:35:14 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:35:04.378 13:35:14 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:04.378 13:35:14 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:35:04.378 13:35:14 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:35:04.378 13:35:14 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:35:04.378 13:35:14 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:04.635 13:35:15 chaining -- bdev/chaining.sh@235 -- # decrypt=9590 00:35:04.635 13:35:15 chaining -- bdev/chaining.sh@236 -- # get_stat_bperf executed crc32c 00:35:04.635 13:35:15 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:35:04.635 13:35:15 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:35:04.635 
13:35:15 chaining -- bdev/chaining.sh@39 -- # event=executed 00:35:04.635 13:35:15 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:35:04.635 13:35:15 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:35:04.635 13:35:15 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:35:04.635 13:35:15 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:35:04.635 13:35:15 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:35:04.635 13:35:15 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:04.893 13:35:15 chaining -- bdev/chaining.sh@236 -- # crc32c=19180 00:35:04.893 13:35:15 chaining -- bdev/chaining.sh@238 -- # (( sequence > 0 )) 00:35:04.893 13:35:15 chaining -- bdev/chaining.sh@239 -- # (( encrypt + decrypt == sequence )) 00:35:04.893 13:35:15 chaining -- bdev/chaining.sh@240 -- # (( encrypt + decrypt == crc32c )) 00:35:04.893 13:35:15 chaining -- bdev/chaining.sh@242 -- # killprocess 1089885 00:35:04.893 13:35:15 chaining -- common/autotest_common.sh@950 -- # '[' -z 1089885 ']' 00:35:04.893 13:35:15 chaining -- common/autotest_common.sh@954 -- # kill -0 1089885 00:35:04.893 13:35:15 chaining -- common/autotest_common.sh@955 -- # uname 00:35:04.893 13:35:15 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:04.893 13:35:15 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1089885 00:35:04.893 13:35:15 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:05.151 13:35:15 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:05.151 13:35:15 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1089885' 00:35:05.151 killing process with pid 1089885 00:35:05.151 13:35:15 chaining -- common/autotest_common.sh@969 -- # kill 1089885 00:35:05.151 Received shutdown signal, test time was about 
5.000000 seconds
00:35:05.151
00:35:05.151 Latency(us)
00:35:05.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:05.151 ===================================================================================================================
00:35:05.151 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:05.151 13:35:15 chaining -- common/autotest_common.sh@974 -- # wait 1089885
00:35:05.151 13:35:15 chaining -- bdev/chaining.sh@243 -- # nvmftestfini
00:35:05.151 13:35:15 chaining -- nvmf/common.sh@488 -- # nvmfcleanup
00:35:05.151 13:35:15 chaining -- nvmf/common.sh@117 -- # sync
00:35:05.151 13:35:15 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:35:05.151 13:35:15 chaining -- nvmf/common.sh@120 -- # set +e
00:35:05.151 13:35:15 chaining -- nvmf/common.sh@121 -- # for i in {1..20}
00:35:05.151 13:35:15 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:35:05.151 rmmod nvme_tcp
00:35:05.151 rmmod nvme_fabrics
00:35:05.151 rmmod nvme_keyring
00:35:05.409 13:35:15 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:35:05.409 13:35:15 chaining -- nvmf/common.sh@124 -- # set -e
00:35:05.409 13:35:15 chaining -- nvmf/common.sh@125 -- # return 0
00:35:05.409 13:35:15 chaining -- nvmf/common.sh@489 -- # '[' -n 1088528 ']'
00:35:05.409 13:35:15 chaining -- nvmf/common.sh@490 -- # killprocess 1088528
00:35:05.409 13:35:15 chaining -- common/autotest_common.sh@950 -- # '[' -z 1088528 ']'
00:35:05.409 13:35:15 chaining -- common/autotest_common.sh@954 -- # kill -0 1088528
00:35:05.409 13:35:15 chaining -- common/autotest_common.sh@955 -- # uname
00:35:05.409 13:35:15 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:05.409 13:35:15 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1088528
00:35:05.409 13:35:15 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:35:05.409 13:35:15 chaining -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:35:05.409 13:35:15 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1088528'
00:35:05.409 killing process with pid 1088528
00:35:05.409 13:35:15 chaining -- common/autotest_common.sh@969 -- # kill 1088528
00:35:05.409 13:35:15 chaining -- common/autotest_common.sh@974 -- # wait 1088528
00:35:05.667 13:35:15 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:35:05.667 13:35:15 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:35:05.667 13:35:15 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:35:05.667 13:35:15 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:35:05.667 13:35:15 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns
00:35:05.667 13:35:15 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:05.667 13:35:15 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:35:05.667 13:35:15 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:07.567 13:35:17 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:35:07.567 13:35:17 chaining -- bdev/chaining.sh@245 -- # trap - SIGINT SIGTERM EXIT
00:35:07.567
00:35:07.567 real 0m49.726s
00:35:07.567 user 0m59.568s
00:35:07.567 sys 0m12.980s
00:35:07.567 13:35:17 chaining -- common/autotest_common.sh@1126 -- # xtrace_disable
00:35:07.567 13:35:17 chaining -- common/autotest_common.sh@10 -- # set +x
00:35:07.567 ************************************
00:35:07.567 END TEST chaining
00:35:07.567 ************************************
00:35:07.567 13:35:18 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:35:07.567 13:35:18 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:35:07.567 13:35:18 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:35:07.567 13:35:18 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]]
00:35:07.567 13:35:18 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT
00:35:07.567 13:35:18 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup
00:35:07.567 13:35:18 -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:07.567 13:35:18 -- common/autotest_common.sh@10 -- # set +x
00:35:07.567 13:35:18 -- spdk/autotest.sh@387 -- # autotest_cleanup
00:35:07.567 13:35:18 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:35:07.567 13:35:18 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:35:07.567 13:35:18 -- common/autotest_common.sh@10 -- # set +x
00:35:14.170 INFO: APP EXITING
00:35:14.170 INFO: killing all VMs
00:35:14.170 INFO: killing vhost app
00:35:14.170 INFO: EXIT DONE
00:35:18.359 Waiting for block devices as requested
00:35:18.359 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:35:18.359 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:35:18.359 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:35:18.359 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:35:18.359 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:35:18.359 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:35:18.359 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:35:18.617 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:35:18.617 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:35:18.617 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:35:18.875 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:35:18.875 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:35:18.875 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:35:19.133 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:35:19.133 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:35:19.133 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:35:19.391 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:35:24.657 Cleaning
00:35:24.657 Removing: /var/run/dpdk/spdk0/config
00:35:24.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:35:24.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:35:24.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:35:24.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:35:24.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:35:24.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:35:24.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:35:24.657 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:35:24.657 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:35:24.657 Removing: /var/run/dpdk/spdk0/hugepage_info
00:35:24.657 Removing: /dev/shm/nvmf_trace.0
00:35:24.657 Removing: /dev/shm/spdk_tgt_trace.pid772818
00:35:24.657 Removing: /var/run/dpdk/spdk0
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1006904
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1009603
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1017151
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1019848
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1024822
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1025337
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1025861
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1026240
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1026807
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1027713
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1028697
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1029175
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1031312
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1033540
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1036366
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1038054
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1040288
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1042579
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1044715
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1046402
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1047200
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1047747
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1050109
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1052615
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1054869
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1056201
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1057777
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1058333
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1058529
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1058669
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1058962
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1059110
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1060482
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1062475
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1064453
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1065436
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1066877
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1067157
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1067300
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1067455
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1068576
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1069193
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1069676
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1072227
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1074534
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1076804
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1078133
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1079712
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1080279
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1080480
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1084894
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1085189
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1085355
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1085513
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1085803
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1086387
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1087437
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1088802
00:35:24.657 Removing: /var/run/dpdk/spdk_pid1089885
00:35:24.657 Removing: /var/run/dpdk/spdk_pid767784
00:35:24.657 Removing: /var/run/dpdk/spdk_pid771542
00:35:24.657 Removing: /var/run/dpdk/spdk_pid772818
00:35:24.657 Removing: /var/run/dpdk/spdk_pid773444
00:35:24.657 Removing: /var/run/dpdk/spdk_pid774439
00:35:24.657 Removing: /var/run/dpdk/spdk_pid774718
00:35:24.658 Removing: /var/run/dpdk/spdk_pid775652
00:35:24.658 Removing: /var/run/dpdk/spdk_pid775906
00:35:24.658 Removing: /var/run/dpdk/spdk_pid776282
00:35:24.658 Removing: /var/run/dpdk/spdk_pid779871
00:35:24.658 Removing: /var/run/dpdk/spdk_pid781852
00:35:24.658 Removing: /var/run/dpdk/spdk_pid782165
00:35:24.658 Removing: /var/run/dpdk/spdk_pid782556
00:35:24.658 Removing: /var/run/dpdk/spdk_pid783042
00:35:24.658 Removing: /var/run/dpdk/spdk_pid783392
00:35:24.658 Removing: /var/run/dpdk/spdk_pid783679
00:35:24.658 Removing: /var/run/dpdk/spdk_pid783882
00:35:24.658 Removing: /var/run/dpdk/spdk_pid784153
00:35:24.658 Removing: /var/run/dpdk/spdk_pid784924
00:35:24.658 Removing: /var/run/dpdk/spdk_pid788271
00:35:24.658 Removing: /var/run/dpdk/spdk_pid788553
00:35:24.658 Removing: /var/run/dpdk/spdk_pid788875
00:35:24.658 Removing: /var/run/dpdk/spdk_pid789177
00:35:24.658 Removing: /var/run/dpdk/spdk_pid789252
00:35:24.658 Removing: /var/run/dpdk/spdk_pid789518
00:35:24.658 Removing: /var/run/dpdk/spdk_pid789802
00:35:24.658 Removing: /var/run/dpdk/spdk_pid790081
00:35:24.658 Removing: /var/run/dpdk/spdk_pid790367
00:35:24.658 Removing: /var/run/dpdk/spdk_pid790646
00:35:24.658 Removing: /var/run/dpdk/spdk_pid790932
00:35:24.658 Removing: /var/run/dpdk/spdk_pid791215
00:35:24.658 Removing: /var/run/dpdk/spdk_pid791501
00:35:24.658 Removing: /var/run/dpdk/spdk_pid791783
00:35:24.658 Removing: /var/run/dpdk/spdk_pid792067
00:35:24.658 Removing: /var/run/dpdk/spdk_pid792350
00:35:24.658 Removing: /var/run/dpdk/spdk_pid792633
00:35:24.658 Removing: /var/run/dpdk/spdk_pid792914
00:35:24.658 Removing: /var/run/dpdk/spdk_pid793198
00:35:24.658 Removing: /var/run/dpdk/spdk_pid793477
00:35:24.658 Removing: /var/run/dpdk/spdk_pid793763
00:35:24.658 Removing: /var/run/dpdk/spdk_pid794042
00:35:24.658 Removing: /var/run/dpdk/spdk_pid794330
00:35:24.658 Removing: /var/run/dpdk/spdk_pid794609
00:35:24.658 Removing: /var/run/dpdk/spdk_pid794897
00:35:24.658 Removing: /var/run/dpdk/spdk_pid795181
00:35:24.658 Removing: /var/run/dpdk/spdk_pid795467
00:35:24.658 Removing: /var/run/dpdk/spdk_pid795746
00:35:24.658 Removing: /var/run/dpdk/spdk_pid796072
00:35:24.658 Removing: /var/run/dpdk/spdk_pid796459
00:35:24.658 Removing: /var/run/dpdk/spdk_pid797254
00:35:24.658 Removing: /var/run/dpdk/spdk_pid797685
00:35:24.658 Removing: /var/run/dpdk/spdk_pid798140
00:35:24.658 Removing: /var/run/dpdk/spdk_pid798521
00:35:24.658 Removing: /var/run/dpdk/spdk_pid798830
00:35:24.658 Removing: /var/run/dpdk/spdk_pid799344
00:35:24.658 Removing: /var/run/dpdk/spdk_pid799420
00:35:24.658 Removing: /var/run/dpdk/spdk_pid799843
00:35:24.658 Removing: /var/run/dpdk/spdk_pid800403
00:35:24.658 Removing: /var/run/dpdk/spdk_pid800864
00:35:24.658 Removing: /var/run/dpdk/spdk_pid800975
00:35:24.658 Removing: /var/run/dpdk/spdk_pid806022
00:35:24.658 Removing: /var/run/dpdk/spdk_pid808103
00:35:24.658 Removing: /var/run/dpdk/spdk_pid810296
00:35:24.658 Removing: /var/run/dpdk/spdk_pid811439
00:35:24.658 Removing: /var/run/dpdk/spdk_pid812780
00:35:24.658 Removing: /var/run/dpdk/spdk_pid813131
00:35:24.658 Removing: /var/run/dpdk/spdk_pid813337
00:35:24.658 Removing: /var/run/dpdk/spdk_pid813368
00:35:24.658 Removing: /var/run/dpdk/spdk_pid818221
00:35:24.658 Removing: /var/run/dpdk/spdk_pid818882
00:35:24.658 Removing: /var/run/dpdk/spdk_pid820117
00:35:24.658 Removing: /var/run/dpdk/spdk_pid820410
00:35:24.658 Removing: /var/run/dpdk/spdk_pid829389
00:35:24.658 Removing: /var/run/dpdk/spdk_pid831851
00:35:24.658 Removing: /var/run/dpdk/spdk_pid832892
00:35:24.658 Removing: /var/run/dpdk/spdk_pid837785
00:35:24.658 Removing: /var/run/dpdk/spdk_pid839636
00:35:24.658 Removing: /var/run/dpdk/spdk_pid840762
00:35:24.658 Removing: /var/run/dpdk/spdk_pid845762
00:35:24.658 Removing: /var/run/dpdk/spdk_pid848517
00:35:24.658 Removing: /var/run/dpdk/spdk_pid849674
00:35:24.658 Removing: /var/run/dpdk/spdk_pid861559
00:35:24.658 Removing: /var/run/dpdk/spdk_pid864236
00:35:24.658 Removing: /var/run/dpdk/spdk_pid865952
00:35:24.658 Removing: /var/run/dpdk/spdk_pid877533
00:35:24.658 Removing: /var/run/dpdk/spdk_pid880212
00:35:24.658 Removing: /var/run/dpdk/spdk_pid881374
00:35:24.658 Removing: /var/run/dpdk/spdk_pid893000
00:35:24.658 Removing: /var/run/dpdk/spdk_pid896998
00:35:24.658 Removing: /var/run/dpdk/spdk_pid898301
00:35:24.658 Removing: /var/run/dpdk/spdk_pid911910
00:35:24.658 Removing: /var/run/dpdk/spdk_pid914885
00:35:24.658 Removing: /var/run/dpdk/spdk_pid916265
00:35:24.658 Removing: /var/run/dpdk/spdk_pid929380
00:35:24.658 Removing: /var/run/dpdk/spdk_pid932345
00:35:24.658 Removing: /var/run/dpdk/spdk_pid933519
00:35:24.658 Removing: /var/run/dpdk/spdk_pid947409
00:35:24.658 Removing: /var/run/dpdk/spdk_pid952412
00:35:24.658 Removing: /var/run/dpdk/spdk_pid953684
00:35:24.658 Removing: /var/run/dpdk/spdk_pid955257
00:35:24.658 Removing: /var/run/dpdk/spdk_pid959178
00:35:24.658 Removing: /var/run/dpdk/spdk_pid965214
00:35:24.658 Removing: /var/run/dpdk/spdk_pid968729
00:35:24.658 Removing: /var/run/dpdk/spdk_pid974379
00:35:24.658 Removing: /var/run/dpdk/spdk_pid978646
00:35:24.658 Removing: /var/run/dpdk/spdk_pid985130
00:35:24.658 Removing: /var/run/dpdk/spdk_pid988292
00:35:24.658 Removing: /var/run/dpdk/spdk_pid996153
00:35:24.658 Removing: /var/run/dpdk/spdk_pid999016
00:35:24.658 Clean
00:35:24.917 13:35:35 -- common/autotest_common.sh@1451 -- # return 0
00:35:24.917 13:35:35 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:35:24.917 13:35:35 -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:24.917 13:35:35 -- common/autotest_common.sh@10 -- # set +x
00:35:24.917 13:35:35 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:35:24.917 13:35:35 -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:24.917 13:35:35 -- common/autotest_common.sh@10 -- # set +x
00:35:24.917 13:35:35 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt
00:35:24.917 13:35:35 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log ]]
00:35:24.917 13:35:35 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log
00:35:24.917 13:35:35 -- spdk/autotest.sh@395 -- # hash lcov
00:35:24.917 13:35:35 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:35:24.917 13:35:35 -- spdk/autotest.sh@397 -- # hostname
00:35:24.917 13:35:35 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/crypto-phy-autotest/spdk -t spdk-wfp-19 -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info
00:35:25.176 geninfo: WARNING: invalid characters removed from testname!
00:35:51.744 13:36:00 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:35:54.282 13:36:04 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:35:56.220 13:36:06 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:35:58.754 13:36:09 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:36:01.288 13:36:11 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:36:03.821 13:36:13 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:36:06.356 13:36:16 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:06.356 13:36:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:36:06.356 13:36:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:36:06.356 13:36:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:06.356 13:36:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:06.356 13:36:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:06.356 13:36:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:06.356 13:36:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:06.356 13:36:16 -- paths/export.sh@5 -- $ export PATH
00:36:06.356 13:36:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:06.356 13:36:16 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:36:06.356 13:36:16 -- common/autobuild_common.sh@447 -- $ date +%s
00:36:06.356 13:36:16 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721907376.XXXXXX
00:36:06.356 13:36:16 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721907376.lz5oqQ
00:36:06.356 13:36:16 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:36:06.356 13:36:16 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:36:06.356 13:36:16 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:36:06.356 13:36:16 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:36:06.356 13:36:16 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:36:06.356 13:36:16 -- common/autobuild_common.sh@463 -- $ get_config_params
00:36:06.356 13:36:16 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:36:06.356 13:36:16 -- common/autotest_common.sh@10 -- $ set +x
00:36:06.356 13:36:16 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk'
00:36:06.356 13:36:16 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:36:06.356 13:36:16 -- pm/common@17 -- $ local monitor
00:36:06.356 13:36:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:06.356 13:36:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:06.356 13:36:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:06.356 13:36:16 -- pm/common@21 -- $ date +%s
00:36:06.356 13:36:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:06.356 13:36:16 -- pm/common@21 -- $ date +%s
00:36:06.356 13:36:16 -- pm/common@25 -- $ sleep 1
00:36:06.356 13:36:16 -- pm/common@21 -- $ date +%s
00:36:06.356 13:36:16 -- pm/common@21 -- $ date +%s
00:36:06.356 13:36:16 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721907376
00:36:06.356 13:36:16 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721907376
00:36:06.356 13:36:16 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721907376
00:36:06.356 13:36:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721907376
00:36:06.356 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721907376_collect-vmstat.pm.log
00:36:06.356 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721907376_collect-cpu-load.pm.log
00:36:06.356 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721907376_collect-cpu-temp.pm.log
00:36:06.356 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721907376_collect-bmc-pm.bmc.pm.log
00:36:07.294 13:36:17 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:36:07.294 13:36:17 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:36:07.294 13:36:17 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:36:07.294 13:36:17 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:36:07.294 13:36:17 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:36:07.294 13:36:17 -- spdk/autopackage.sh@19 -- $ timing_finish
00:36:07.294 13:36:17 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:07.294 13:36:17 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:36:07.294 13:36:17 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt
00:36:07.294 13:36:17 -- spdk/autopackage.sh@20 -- $ exit 0
00:36:07.294 13:36:17 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:36:07.294 13:36:17 -- pm/common@29 -- $ signal_monitor_resources TERM
00:36:07.294 13:36:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:36:07.294 13:36:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:07.294 13:36:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:36:07.294 13:36:17 -- pm/common@44 -- $ pid=1103557
00:36:07.294 13:36:17 -- pm/common@50 -- $ kill -TERM 1103557
00:36:07.294 13:36:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:07.294 13:36:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:36:07.294 13:36:17 -- pm/common@44 -- $ pid=1103559
00:36:07.294 13:36:17 -- pm/common@50 -- $ kill -TERM 1103559
00:36:07.294 13:36:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:07.294 13:36:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:36:07.294 13:36:17 -- pm/common@44 -- $ pid=1103561
00:36:07.294 13:36:17 -- pm/common@50 -- $ kill -TERM 1103561
00:36:07.294 13:36:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:07.294 13:36:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:36:07.294 13:36:17 -- pm/common@44 -- $ pid=1103583
00:36:07.294 13:36:17 -- pm/common@50 -- $ sudo -E kill -TERM 1103583
00:36:07.294 + [[ -n 638435 ]]
00:36:07.294 + sudo kill 638435
00:36:07.304 [Pipeline] }
00:36:07.323 [Pipeline] // stage
00:36:07.328 [Pipeline] }
00:36:07.345 [Pipeline] // timeout
00:36:07.350 [Pipeline] }
00:36:07.368 [Pipeline] // catchError
00:36:07.372 [Pipeline] }
00:36:07.387 [Pipeline] // wrap
00:36:07.392 [Pipeline] }
00:36:07.405 [Pipeline] // catchError
00:36:07.416 [Pipeline] stage
00:36:07.419 [Pipeline] { (Epilogue)
00:36:07.434 [Pipeline] catchError
00:36:07.436 [Pipeline] {
00:36:07.452 [Pipeline] echo
00:36:07.453 Cleanup processes
00:36:07.460 [Pipeline] sh
00:36:07.749 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:36:07.750 1103666 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/sdr.cache
00:36:07.750 1104006 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:36:07.764 [Pipeline] sh
00:36:08.049 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:36:08.049 ++ grep -v 'sudo pgrep'
00:36:08.049 ++ awk '{print $1}'
00:36:08.049 + sudo kill -9 1103666
00:36:08.049 + true
00:36:08.061 [Pipeline] sh
00:36:08.345 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:08.345 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:36:16.481 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:36:21.763 [Pipeline] sh
00:36:22.044 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:22.044 Artifacts sizes are good
00:36:22.056 [Pipeline] archiveArtifacts
00:36:22.062 Archiving artifacts
00:36:22.215 [Pipeline] sh
00:36:22.498 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest
00:36:22.514 [Pipeline] cleanWs
00:36:22.525 [WS-CLEANUP] Deleting project workspace...
00:36:22.525 [WS-CLEANUP] Deferred wipeout is used...
00:36:22.533 [WS-CLEANUP] done
00:36:22.535 [Pipeline] }
00:36:22.555 [Pipeline] // catchError
00:36:22.564 [Pipeline] sh
00:36:22.841 + logger -p user.info -t JENKINS-CI
00:36:22.850 [Pipeline] }
00:36:22.866 [Pipeline] // stage
00:36:22.872 [Pipeline] }
00:36:22.889 [Pipeline] // node
00:36:22.894 [Pipeline] End of Pipeline
00:36:22.922 Finished: SUCCESS